The picture is a photograph of Hadrian’s Wall, the Wall the Romans built across what is now England.
The Romans built it to protect Roman Britain from raids by the Picts, the tribes that inhabited what would become Scotland. It marked the northern boundary of the Roman Empire in Britain, the dividing point between the unsettled outlands and the area encompassed by the Roman Peace, the Pax Romana.
You’ll note in the photo that this section of Hadrian’s Wall is still standing, though it’s eroded a bit, and it may not be doing a great job of controlling those sheep. The guy in the photo is apparently applying weed-killer, which I assume is meant to keep weeds from further eroding the wall.
This is a blog about cybercrime, not history, so you may be wondering why I’m rattling on about Hadrian’s Wall and the Pax Romana. This post is going to be different: Instead of talking about a case or statute, I’m going to ruminate on an issue I’m trying to sort out.
The issue is the problem of securing cyberspace. That phrase is not really accurate, unless we think cyberspace is a “place” analogous to Roman Britain, or Roman Gaul or Roman Italy or Roman Africa or the Roman Empire. Each of the constituent areas of the Roman Empire (Italy, Gaul, Africa, Britain) was a “place,” a geographical area with clearly defined boundaries. The Roman Empire was also a “place” – a conglomeration of geographical places, to be precise. The Roman Empire’s ability to secure the areas it controlled created and sustained the Pax Romana, at least until the Empire crumbled.
Cyberspace is in one sense analogous to a geographical “place” – like one of the areas that constituted the Roman Empire, or maybe the Empire itself. We tend to describe our experience of it in spatial terms: We “go into” cyberspace when we’re online (an analogy that is particularly apt when we’re in virtual worlds like Second Life or playing games like WoW). We refer to “sites” and “locations” in cyberspace, and cyberspace has for over a decade been characterized as a “frontier,” an unexplored “place” we are in the process of settling and civilizing. All of these are examples of the spatial analogies we implicitly rely on when we talk about and conceptualize cyberspace. We use those analogies, I think, because we have no other way of conceptualizing purely non-spatial experiences – Gibson’s collective, consensual hallucination.
Cyberspace has definitely become a venue for human activity, and that brings us to the issue of securing cyberspace: As we all know, and as I’ve written about almost 200 times on this blog, cyberspace can be and is being used to commit crimes. As I’ve also noted here before, criminal activity undermines social order – the internal order a social system must maintain among its constituents to keep them from preying on each other. As I’ve noted before here and elsewhere, no social system – no tribe, no empire, no nation-state – can survive if its constituents are free to prey upon – to rape, assault, murder, steal from, etc. – each other. Every social system has to maintain a baseline of internal order if it is to survive; order is essential if people are going to produce and distribute food and the other necessities they require to survive and prosper. It’s also essential for stable reproduction and the socialization of children.
Every social system – tribe, empire, nation-state, etc. – also has to maintain external order – it has to fend off attacks from other tribes, empires, nation-states, etc. They do that by relying on their military; Rome used military force not only to fend off its enemies, but to bring other geographical areas under its control. Having done so, it had to protect those areas from external enemies (Hadrian’s Wall) AND preserve internal order, which it, like every other evolved human social system, did with criminal rules and procedures for enforcing those rules.
But so far every human social system – tribe, city-state, empire, nation-state – has been a closed system: It has occupied a bounded geographical area, and used its military to fend off outside enemies and its law enforcement rules and personnel to keep its own citizens from preying on each other (too much). It is for this reason that we tend to refer to the social systems – the nation-states – that are now dominant as “countries.” Their identity and existence are defined by the territory – the “country” – each controls.
Now we have cyberspace: What is it? Is it a new country, to be conquered and put under the control of one or more of our existing nation-states? Do we somehow carve cyberspace up into regions: U.S. Cyberspace, French Cyberspace, Thai Cyberspace, and so on? Or do we make it a “country” in and of itself? If we do that, who runs it and how do they maintain order “in” cyberspace?
Maintaining order “in” cyberspace is, I submit, an oxymoron, as is “securing cyberspace.” Analogizing cyberspace to a geographical “place” may have its uses, but it is not a productive analogy when it comes to talking about threats in cyberspace and how they impact our lives in the real world. The Romans could build Hadrian’s Wall and use it to keep the Picts out of Roman Britannia, but we can’t do that with cyberspace; it is not a separate “place” with its own, indigenous population. Its “occupants,” if we want to use that term, are transients – real-world citizens who drop in and out of cyberspace. Unless and until we develop the technology and the inclination to decant our consciousnesses into an evolved version of cyberspace, we cannot become permanent residents.
What I find most interesting about cyberspace is that it is, IMHO, breaking down the carefully constructed and maintained boundaries that divide the world into nation-states or “countries.” It’s eroding cultural and social barriers, but it’s also eroding geographical boundaries. As I’ve noted here and elsewhere, the defining characteristic of cybercrime is that it tends to be transnational. Unlike traditional crime (which was face-to-face and therefore physically grounded crime), cybercrime is unbounded crime; a cybercriminal can victimize someone halfway around the globe as easily as he/she can someone who is across the street.
That, as I’ve noted here and written about extensively elsewhere, erodes the efficacy of the systems nation-states use to control crime and keep order. Those systems are set up to deal with internal crime: There’s a crime scene, which is local; there are witnesses who know the victim, if not the likely perpetrator, and who can be interviewed to provide information that will help police catch the person who committed the crime. All of these law enforcement systems are organized hierarchically, on a quasi-military model; both the systems that are used to maintain internal order (fight crime) and the systems that are used to maintain external order (keep other countries at bay) are hierarchically organized, territorially-based systems.
Law enforcement officers from Country A can’t simply go into Country B, kick down doors, chase suspects and do whatever else they need to catch a cybercriminal (assuming all of that would be effective in doing so). The same is true of the military: If Country A comes under cyberattack in what seems to be cyberwarfare, instead of cybercrime, the military can’t simply invade Country B, from which the attack seemed to originate. Country B may not actually be the source of the attack; and even if it is, the attack may not be warfare launched by Country B. It might be warfare launched by Country C to get Countries A and B going at each other; or it might be warfare launched by a group of non-nation-state actors who either consider themselves capable of waging war on their own or are interested in getting Countries A and B to attack each other.
I’ve written more about the cumbersome nature of the systems nation-states use to maintain order (and do pretty much everything) elsewhere. You can find a link to one of those articles here, if you’re interested.
My point is that I don’t think the approach we have been using is working, or can work, in a world in which computer technology increasingly makes geography irrelevant. Like others, I wonder if we will not see the nation-state disappear as the basic social system . . . to be replaced by . . . what?
I am becoming increasingly convinced that the territorially-based, hierarchically-organized (and often ridiculously huge) agencies we rely on to secure the real-world areas controlled by nation-states are completely unsuited for, and incapable of, dealing with cyberthreats. I keep coming back to a book I read several years ago, as a source of an analogy for what MIGHT be happening.
The book (the title escapes me, sorry) is about a man, a fairly wealthy and powerful man, who lives with his family in Provence (now part of France) in the period when the Roman Empire is beginning to fail. He’s a native of the area, but has become a Roman citizen, and like all the citizens of the Empire he’s used to peace and security. There were dustups at times, but the world was, and had for a long time been, stable.
As you read the book, you realize what it was like to live through the early decline and then disintegration of the Roman Empire, the entity that had kept everyone safe. It’s easy for us, looking back, to point to things and note how obvious it was that the system was beginning to fail. It’s a lot harder to do that when you’re in the system, you’re used to it and you need it, because if it fails, you’re in trouble. As you read the book, the man it’s about slowly begins to realize that his world is ending, and he’s going to have to figure out how to deal with it. The bad guys are becoming bolder, because the systems that used to keep them in check are not working so well anymore.
I’m not saying we’re Rome, and that everything is going to fall apart in the near future. What I AM saying is that I think we’re a little like the people who closed their eyes when it began to become apparent that what had been Rome wasn’t going to work any more. They may not have had any other choice, but I think we do. And I don’t think endless conferences and press releases touting law enforcement’s success against child pornography and the odd, inept local cybercriminal and all the other “noise” about how well governments are dealing with cybercrime are getting us anywhere. The world is changing, and I don’t see why we can’t accept that and try to adapt to it, instead of letting change happen and reacting to it.
Friday, August 29, 2008
Wednesday, August 27, 2008
Defamation, Harassment and the First Amendment
I’ve written about criminal defamation and harassment before, but here I want to summarize the result in a civil defamation case out of California . . . because it’s a little depressing.
Here are the facts, as reported by the California Court of Appeals in Evans v. Evans, 162 Cal. App.4th 1157, 76 Cal. Rptr.3d 859 (2008):
Thomas [Evans] is a[n] officer with the San Diego County Sheriff's Department. He and Linda were married in 1985, and separated in 1998. In 2002, the court entered a judgment dissolving the marriage. During the next five years, the parties had substantial ongoing conflict over custody, child support and other issues. . . . Evans v. Evans, supra.

Linda appealed the order, and won. The California Court of Appeals held that the preliminary injunction violated the First Amendment.
In March 2007, Thomas filed a complaint against Linda, alleging harassment, slander and defamation. . . [and] breach of privacy. . . . . The gist . . . was that Linda has engaged in a series of acts intended to harass Thomas and cause him severe emotional stress and injury to his reputation and career. . . .
Thomas moved for a . . . preliminary injunction . . . enjoining Linda `from engaging in the slanderous and harassing conduct against him. In support, Thomas relied primarily on his own declarations in which he asserted . . .[that] Linda and her mother (Preddy) had placed defamatory information about him on the Internet. . . .
Thomas stated that: `In December 2006, I was informed that there were websites posted by [Linda and Preddy] with numerous defaming comments and statements about me as a sworn law enforcement officer. . . .’ Thomas also said he `discovered . . . [Preddy] inappropriately gained access to both my . . . medical . . . and financial records, and had published information from them on the internet.’ [He] attached . . . Web site pages showing statements that appeared to have been made by Preddy in a family court declaration, accusing [him] of physical abuse and harassment against Linda. [He] did not submit any evidence that any private medical or financial information or identifying . . . facts had been published on the Internet.
[He] also submitted . . . Web pages in which. . . Linda . . . accus[ed] him of physical abuse against her and her son. . . . Thomas stated `[a]s recently as February 19, 2007, a Google search of my name on thepetitionsite.com generated a blurb posted by [Linda] stating: ‘Our eldest son was returned to my `Primary Care' after his father, San Diego County Sheriff's Sergeant, Thomas C. Evans, struck him with a belt repeatedly. . . .’ This statement is entirely false and reflective of the defamatory and harassing comments published by the defendants.’
Thomas declared: `I strongly believe that the actions of [Linda and Preddy] have affected, and will continue to affect, my reputation, career, and general well-being. . . . The actions ... have caused me, and continue to cause me, substantial emotional distress as I fear for my reputation, my relationships with friends and family, and my career with the San Diego Sheriff's Department.’ Thomas argued a preliminary injunction was necessary to prevent further wrongful `conduct that would only serve to negatively impact my personal and professional life’. . . .
Linda . . . filed a . . . pleading, denying each of Thomas's allegations. . . . But she did not present any evidence to counter Thomas's evidence. One week later, on April 13, 2007, the court held a hearing on Thomas's preliminary injunction motion. . . .
At the hearing, the court [found that the] injunction was `more than warranted’, . . . ruling . . . . that `there is a reasonable probability [Thomas] will prevail on the merits of this action. [Thomas] has provided . . . sufficient evidence to establish the ongoing harassment activities by [Linda and Preddy]. Moreover, . . . [Thomas] may suffer irreparable harm if [they] are not: 1) enjoined from publishing false and defamatory statements and/or confidential personal information about him on the internet; and 2) enjoined from contacting [his] employer via email or otherwise regarding [him].’ . . .
Five days later, . . . the court issued the preliminary injunction challenged in this appeal. The injunction stated: “1. [Linda and Preddy] are enjoined from publishing false and defamatory statements and/or confidential personal information about [Thomas] on the internet; and 2. [Linda and Preddy] are enjoined from contacting [Thomas's] employer via e-mail or otherwise regarding [Thomas]. Since [Thomas] is employed by the San Diego Sheriff's Department, this injunction should not be construed to prohibit defendants from calling 911 to report criminal conduct.”
With regard to Linda’s publishing “false and defamatory statements,” online, the court of appeals held that the injunction was fatally flawed for two reasons:
[T]he preliminary injunction prohibiting Linda from publishing any `false and defamatory’ statements on the Internet is constitutionally invalid. Because there has been no trial and no determination on the merits that any statement made by Linda was defamatory, the court cannot prohibit her from making statements characterized only as `false and defamatory.’ . . . Evans v. Evans, supra.

I understand what the court is saying, and I think it’s probably correct, as a matter of constitutional law. But it means you can’t do anything to stop the publication of “false and defamatory” statements until you’ve filed a complaint, taken the case to trial and won . . . which could be a long, long time . . . well after the statements have done their damage.
This portion of the order is also . . .unconstitutionally vague and overbroad. The injunction broadly prohibited Linda from publishing any defamatory comments about Thomas. This sweeping prohibition fails to adequately delineate which of Linda's future comments might violate the injunction and lead to contempt of court. . . .
[O]ur conclusion should not be interpreted as an opinion on the merits of Thomas's . . . claims. It is well settled that a plaintiff may recover damages for speech that is proved to be defamatory or libelous. Additionally, a court may enjoin a defendant after trial from repeating defamatory statements. The only issue resolved here is that a court may not constitutionally prevent a person from uttering a `defamatory’ statement before it has been determined at trial that the statement was defamatory.
The court of appeals reached essentially the same conclusions with regard to the injunction’s prohibiting the publication of “confidential personal information” online:
A prohibition against disclosing confidential information constitutes a prior restraint. . . . However, because it . . . concerns the right of privacy under the California Constitution, a prohibition may be proper under certain compelling . . . circumstances. Evans v. Evans, supra.

So Thomas may be able to get an injunction barring Linda from posting his phone number and address online, and maybe his Social Security number. Now he has to go back and try to get a court to do that.
In determining whether such circumstances exist, courts . . . apply a balancing test, weighing the competing privacy and free speech . . . rights. . . . Relevant factors include whether the person is a public or private figure, the scope of the prior restraint, the nature of the private information, whether the information is of legitimate public concern, the extent of the potential harm if [it] is disclosed, and the strength of the private and governmental interest in preventing publication. . .
We cannot determine whether the court properly applied the balancing test in this case because the order. . . . does not contain a definition of `confidential personal information’. . . . Without a definition, the injunction is not sufficiently clear to determine whether Thomas's privacy rights outweigh Linda's free speech rights. . . .
Thomas [says] Linda will place (or has placed) his telephone number, address, and Social Security number on the Internet. He argues the disclosure of the information will put his safety and well-being in jeopardy, . . . because of his job as a deputy sheriff. We agree a court would be fully justified in . . . preventing a party from putting this type of identifying information about a person on the Internet, particularly where . . . that person is a law enforcement officer. . . . Such a restriction does not involve information that has any public value and would serve the significant public interest of protecting the safety of a law enforcement officer. . . .
Thomas did not specifically request an order preventing his identifying information from being placed on the Internet. Instead, [he] focused primarily on his concern that Linda and/or her mother had placed, or planned to place, information about the divorce . . . on the Internet. . . . [T]he mere fact information is contained in court files does not necessarily mean it . . . cannot be disclosed. . . .[C]ertain information . . . may be protected from disclosure, such as information . . . that would compromise a person's financial security or personal safety. . . . But an order enjoining the disclosure must be narrowly tailored to protect only these specific interests and should not unnecessarily interfere with a person's free speech rights. . . .
[T]he order preventing Linda from placing any `confidential personal information’ about Thomas on the Internet is vague, overbroad, and not narrowly tailored. On remand, the court should . . . determine whether there is a compelling reason such information be kept private. A compelling reason includes . . . facts showing the disclosure of information would jeopardize the personal safety of Thomas or his family and/or would lead him to fear for his or his family's safety. If a compelling reason exists, the court should . . . enjoin Linda from publishing the information.
It seems the lower court and Thomas’ lawyer (Linda didn’t have one) were at fault for not specifying in more detail what kind of information was not to be put online. I assume it won’t be difficult for Thomas and his lawyer to do that on their next try. Of course, I get the impression it may already have been out there for a while, but maybe he moved.
This case is an object lesson in what can happen if someone is really angry at you and knows how to use the web to take out their anger.
Monday, August 25, 2008
Computer Fraud and Conspiracy . . . ?
Conspiracy is a traditional common law crime, one that has become very popular in modern U.S. law. Learned Hand, a distinguished federal judge for almost forty years, said conspiracy was “the darling of the modern prosecutor’s nursery” because it is used so often and in so many ways.
Sometimes I think you can say that of computer crime statutes, as well. Some of them are used in novel and, on occasion, at least arguably improper ways (like the federal computer crime charge against Lori Drew for her contribution to Megan Meier’s suicide).
Today I want to write about an interesting case from Louisiana that combines conspiracy and computer crime. I have no problem concluding that what the defendant did was “wrong,” was a crime, but I think the charges brought against her are interesting . . . I’m not sure they would have been my first choice.
The case is from New Orleans. Here is a summary of the facts, taken from a judicial opinion and a news story: Five years ago, Glenda Spears, a criminal defense attorney, and Angela Kirkland, a probation officer, came up with a scheme to get judges to release probationers who paid them off. The scheme targeted probationers who faced court-ordered drug testing or other conditions of their probation.
According to the indictment returned in the case, Kirkland, a former drug court counselor, would urge drug court probationers to hire Glenda Spears as their lawyer "if they wanted to be released from probation." Spears would charge each probationer a fee for obtaining his or her release, and give half of the bribe to Kirkland, according to the indictment. The indictment says Kirkland would then recommend to the judge that the probationer be released. As the local U.S. Attorney noted after the scheme came to light, "Both state and federal court judges routinely follow the recommendations of probation officers whom they believe are acting in good faith,” so the scheme worked.
(For more detail on the facts, see Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, New Orleans Times Picayune 1 (August 13, 2004), and In re Spears, 964 So.2d 293 (Supreme Court of Louisiana 2007).)
The scheme began to unravel in April of 2004, when the Chief Judge of the Orleans Parish Criminal District Court told the FBI that someone who was “on drug probation was complaining that his probation officer, Angela Kirkland, was pressuring him to pay $500 in order to be released from probation and aftercare.” In re Spears, supra. The man who was on probation
Sounds like a local bribery (and maybe some kind of obstruction of justice) case, doesn’t it? Well, it became a federal computer crime case. Why, you ask? The “why” really encompasses two issues: The first is why a FEDERAL case. The other is why a federal COMPUTER CRIME case, instead of a federal bribery case.
As to the first question, I can only speculate, but it’s pretty common in a case of criminal corruption in a local justice system for the crimes to be charged federally, instead of locally, for some practical reasons. It’s easier for federal authorities to prosecute local officials because they don’t work in the same system and haven’t been colleagues of the people who are being prosecuted. It can also enhance the appearance of fairness in the proceeding because the federal prosecutors who bring the case and the federal judge who handles it have no ties to the defendants -- no reason not to be perfectly impartial. That might have been a significant factor in this case because Spears “is the sister of lawyer and former Judge Ike Spears, and the sister-in-law of 1st City Court Judge Sonja Spears.” Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra.
But what we really care about is the second question. There are federal bribery statutes, so charges might have worked here, but maybe not. Maybe neither Spears or Kirkland qualify as “public officials” under 18 U.S. Code § 201, which is the basic federal bribery statute. That could explain why this became a federal computer crimes case.
That is what it became: Spears was charged with computer fraud under 18 U.S. Code § 1030(a)(4) and with conspiracy to commit computer fraud under 18 U.S. Code § 371. (I’m not sure about Kirkland; I can’t find any reports of what she was charged with, if anything.)
Section 1030(a)(4) makes it a federal crime to “knowingly and with intent to defraud, access a . . . computer without authorization, or exceed authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value”. The prosecution’s theory was that Spears “actions constituted computer fraud because she affected the Orleans Parish Criminal District Court Docket Master Computer, where all entries involving a defendant's case are maintained.” In re Spears, supra. The computer “provides case numbers, defendants’ names, charges, court minutes and other key information related to” charges before this court. Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra.
Where was the fraud, you ask? Well, the prosecution’s theory was that Spears and Kirkland put false information into the computer "`to release a person from probation for the personal gain of something of value.’" Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra. The result is fraud, and since they used a computer, however tangentially, to carry out the fraud their actions constituted federal computer fraud, which is punishable by a fine and up to 5 years in prison.
The conspiracy charge under 18 U.S. Code § 371 is very simple: Section 371 makes it a crime to conspire to commit a federal crime. Conspiracy is, in essence, a criminal contract: The crime is committed when two or more people agree to the commission of a crime, a federal crime in this instance. Since what Spears and Kirkland did constituted federal computer crime (under the prosecution’s theory), and since they both agreed to commit the crime, we have conspiracy to commit computer fraud, which adds another 5 years in prison to the punishment Spears would get if convicted.
Not surprisingly, given all those recorded phone calls and meetings, Spears pled guilty. In a stroke of irony, she was sentenced to 3 years probation. She was put on home detention for 6 months, and required to wear an electronic monitoring device. Spears was also fined $10,000 and ordered to perform 200 hours of community service after she completed home detention. In re Spears, supra. She was permanently disbarred by the Louisiana Supreme Court, which means she’ll never practice law there (or probably anywhere else) again.
My focus, though, is not on the penalty she received (which actually doesn’t seem like that much), but on the charges. This is an instructive case: Note, first, how what doesn’t really SEEM like a federal computer crime case can become one; the incidental use of the court’s computer transformed what I see as a state bribery case into a computer crime case. That’s one of the interesting aspects of federal criminal law: It can have a very expansive reach.
Also note how useful conspiracy is. Perhaps you see why Learned Hand called it the prosecutor’s “darling:” If the commission of a crime involves 2 or more people, you pretty much have an automatic conspiracy charge, which ratchets up the penalty and has certain other consequences, such as letting a lot of people be tried together. If the evidence hadn’t been as strong and if Kirkland hadn’t cut a deal, the two might well have gone to trial . . . together, which might well have meant that they were pointing fingers at each other, trying to save themselves by dooming the other. That’s often an inevitable, perhaps irresistible, strategy, but it only works to the prosecution’s advantage. If the defendants are blaming each other, they both look bad.
On a side note, it’s really depressing and aggravating to see a lawyer do something like this.
Sometimes I think you can say that of computer crime statutes, as well. Some of them are used in novel and, on occasion, at least arguably improper ways (like the federal computer crime charge against Lori Drew for her contribution to Megan Meier’s suicide).
Today I want to write about an interesting case from Louisiana that combines conspiracy and computer crime. I have no problem concluding that what the defendant did was “wrong,” was a crime, but I think the charges brought against her are interesting . . . I’m not sure they would have been my first choice.
The case is from New Orleans. Here is a summary of the facts, taken from a judicial opinion and a news story: Five years ago, Glenda Spears, a criminal defense attorney, and Angela Kirkland, a probation officer, came up with a scheme to get judges to release probationers who paid them off. The scheme targeted probationers who faced court-ordered drug testing or other conditions of their probation.
According to the indictment returned in the case, Kirkland, a former drug court counselor, would urge drug court probationers to hire Glenda Spears as their lawyer "if they wanted to be released from probation." Spears would charge each probationer a fee for obtaining his or her release, and give half of the bribe to Kirkland, according to the indictment. The indictment says Kirkland would then recommend to the judge that the probationer be released. As the local U.S. Attorney noted after the scheme came to light, "Both state and federal court judges routinely follow the recommendations of probation officers whom they believe are acting in good faith,” so the scheme worked.
(For more detail on the facts, see Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, New Orleans Times Picayune 1 (August 13, 2004) and In re Spears, 964 So.2d 293 (Supreme Court of Louisiana 2007).
The scheme began to unravel in April of 2004, when the Chief Judge of the Orleans Parish Criminal District Court told the FBI that someone who was “on drug probation was complaining that his probation officer, Angela Kirkland, was pressuring him to pay $500 in order to be released from probation and aftercare.” In re Spears, supra. The man who was on probation
agreed to cooperate with law enforcement and to record his conversations with Ms. Kirkland. On April 26, 2004, the probationer met Ms. Kirkland, and in a recorded conversation, she accepted $360, a portion of the requested $500 payment. During the conversation, Ms. Kirkland told the probationer that as a result of the payment he would be released from drug court probation and aftercare. Shortly thereafter, Ms. Kirkland recommended to the sentencing judge that the defendant be released from probation and aftercare, and he was released. In re Spears, supra. The investigation continued, as FBI agents and officers from the
New Orleans Police Department (NOPD) began interviewing other[s] . . . who had been released from probation. This . . . revealed that Spears, who was then employed by the Orleans Indigent Defender Program, had received a $500 payment from a probationer on November 15, 2003. Spears split this fee with Ms. Kirkland, who . . . recommended to the court that the probationer be released from probation. The judge accepted this recommendation, and the probationer was released from probation. In re Spears, supra.
On July 12, 2004, the FBI and the NOPD approached Ms. Kirkland, who admitted that she had engaged in criminal activity. She also implicated Spears in the scheme. . . . With Ms. Kirkland's cooperation, agents . . . taped three . . . calls [in which] Kirkland told Spears that she had . . . other probationers who wanted to be released from probation. . . . Spears said that the fee . . . would be $2,500 per probationer. . . .
[O]n July 16, an . . . agent posing as a probationer met with Spears and paid her $2,500. During the meeting, which was . . . recorded . . ., Spears accepted the . . . payment and agreed that the probationer would be released from probation because she was working with Ms. Kirkland. Spears [met] with Ms. Kirkland and split . . . the $2,500 [with her].
On July 21, . . . , Spears met with another . . . agent posing as a probationer. This meeting was also . . . recorded . . . and Spears again . . . received $2,500 from the agent. After the payment was made, Spears met Ms. Kirkland under the overpass at Poydras and North Broad Streets and paid her $1,250. That money was recovered by the FBI and Spears was immediately arrested.
Sounds like a local bribery (and maybe some kind of obstruction of justice) case, doesn’t it? Well, it became a federal computer crime case. Why, you ask? The “why” really encompasses two issues: The first is why a FEDERAL case. The other is why a federal COMPUTER CRIME case, instead of a federal bribery case.
As to the first question, I can only speculate, but it’s pretty common in a case of criminal corruption in a local justice system for the crimes to be charged federally, instead of locally, for some practical reasons. It’s easier for federal authorities to prosecute local officials because they don’t work in the same system and haven’t been colleagues of the people who are being prosecuted. It can also enhance the appearance of fairness in the proceeding because the federal prosecutors who bring the case and the federal judge who handles it have no ties to the defendants -- no reason not to be perfectly impartial. That might have been a significant factor in this case because Spears “is the sister of lawyer and former Judge Ike Spears, and the sister-in-law of 1st City Court Judge Sonja Spears.” Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra.
But what we really care about is the second question. There are federal bribery statutes, so bribery charges might have worked here . . . but maybe not. Perhaps neither Spears nor Kirkland qualified as a “public official” under 18 U.S. Code § 201, the basic federal bribery statute. That could explain why this became a federal computer crime case.
That is what it became: Spears was charged with computer fraud under 18 U.S. Code § 1030(a)(4) and with conspiracy to commit computer fraud under 18 U.S. Code § 371. (I’m not sure about Kirkland; I can’t find any reports of what she was charged with, if anything.)
Section 1030(a)(4) makes it a federal crime to “knowingly and with intent to defraud, access a . . . computer without authorization, or exceed authorized access, and by means of such conduct” further “the intended fraud and obtain[] anything of value”. The prosecution’s theory was that Spears’ “actions constituted computer fraud because she affected the Orleans Parish Criminal District Court Docket Master Computer, where all entries involving a defendant's case are maintained.” In re Spears, supra. The computer “provides case numbers, defendants’ names, charges, court minutes and other key information related to” charges before this court. Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra.
Where was the fraud, you ask? Well, the prosecution’s theory was that Spears and Kirkland put false information into the computer “‘to release a person from probation for the personal gain of something of value.’” Gwen Filosa, Lawyer, Ex-Drug Court Worker Indicted, supra. The result is fraud, and since they used a computer, however tangentially, to carry out the fraud, their actions constituted federal computer fraud, which is punishable by a fine and up to 5 years in prison.
The conspiracy charge under 18 U.S. Code § 371 is very simple: Section 371 makes it a crime to conspire to commit a federal crime. Conspiracy is, in essence, a criminal contract: The crime is committed when two or more people agree to the commission of a crime, a federal crime in this instance. Since what Spears and Kirkland did constituted federal computer crime (under the prosecution’s theory), and since they both agreed to commit the crime, we have conspiracy to commit computer fraud, which adds another 5 years in prison to the punishment Spears would get if convicted.
Not surprisingly, given all those recorded phone calls and meetings, Spears pled guilty. In a stroke of irony, she was sentenced to 3 years probation. She was put on home detention for 6 months, and required to wear an electronic monitoring device. Spears was also fined $10,000 and ordered to perform 200 hours of community service after she completed home detention. In re Spears, supra. She was permanently disbarred by the Louisiana Supreme Court, which means she’ll never practice law there (or probably anywhere else) again.
My focus, though, is not on the penalty she received (which actually doesn’t seem like that much), but on the charges. This is an instructive case: Note, first, how what doesn’t really SEEM like a federal computer crime case can become one; the incidental use of the court’s computer transformed what I see as a state bribery case into a computer crime case. That’s one of the interesting aspects of federal criminal law: It can have a very expansive reach.
Also note how useful conspiracy is. Perhaps you see why Learned Hand called it the prosecutor’s “darling”: if the commission of a crime involves 2 or more people, you pretty much have an automatic conspiracy charge, which ratchets up the penalty and has certain other consequences, such as letting a lot of people be tried together. If the evidence hadn’t been as strong and if Kirkland hadn’t cut a deal, the two might well have gone to trial . . . together, which could have meant they were pointing fingers at each other, each trying to save herself by dooming the other. That’s often an inevitable, perhaps irresistible, strategy, but it only works to the prosecution’s advantage: if the defendants are blaming each other, they both look bad.
On a side note, it’s really depressing and aggravating to see a lawyer do something like this.
Friday, August 22, 2008
Aggravation
You probably missed the story from Brunswick, Ohio that ran a couple of weeks ago: A hacker triggered the city’s 8 emergency warning sirens, so they blared out a false tornado warning. You can read about the hack, and watch a video story about it, via this link.
Many residents were confused when the sirens went off because the weather was apparently clear and calm. The police left messages on 14,000 phones, letting those residents know it was a false alarm.
Here’s what I find particularly interesting: The story said that until the hacker was caught or the system was changed, the sirens would remain shut down; they wouldn't be used to signal a tornado or other emergency.
Instead, the police would contact people who have signed up for the town’s telephonic warning system; if you’ve signed up, you’d get a call on your land line or cell phone telling you there’s a tornado or some other emergency. If you haven’t signed up (or don’t have a computer or access to a computer, since you sign up online) you wouldn't get a warning . . . until the hacker is caught or the system is fixed. (I'm assuming it's been fixed by now, but don't really know.)
I find this story interesting for several reasons. Let’s start with the charges that could be brought against the unknown hacker, first under Ohio law and then under federal law. Here’s what seems to be the applicable Ohio provision: “No person shall knowingly gain access to [or] attempt to gain access to . . . any computer system, or computer network without the consent of. . . the owner . . . or other person authorized to give consent by the owner.” Ohio Revised Code § 2913.04(B). If the perpetrator hasn’t been convicted of violating this provision before, the offense is a 4th degree felony; if the perpetrator has a prior conviction under this provision, it’s a 3d degree felony.
The news stories I can find on the Brunswick event don’t tell me much about how the unknown hacker gained access to the system. Let’s assume, though, that the facts show he/she did knowingly gain access to the Brunswick emergency computer system without consent. So we have a violation of this provision – we’ll assume our perpetrator doesn’t have any priors, so the crime is a 4th degree felony. I’ve skimmed the Ohio sentencing guidelines statute, and it looks to me like our hypothetical perpetrator could get off without doing jail time. Ohio Revised Code § 2929.13.
Under the sentencing guideline statute (It’s very complex; I don’t have the patience to parse it in detail, and I doubt you want me to do that here), a court is to impose a prison term on someone who committed a 4th degree felony if it finds that (i) certain aggravating factors apply and (ii) the offender is “not amenable to an available community control sanction”. The aggravating factors are as follows: (i) the crime caused physical harm to someone; (ii) the perpetrator threatened or tried to cause physical harm to someone with a deadly weapon or had a previous conviction for harming someone; (iii) he violated a position of public trust; (iv) he committed the crime as part of an organized criminal activity; (v) the crime is a serious sex offense; (vi) the perpetrator was in prison or had been in prison; (vii) he was on probation or out on bail; or (viii) he had a gun.
I don’t see how any of those apply.
The false alarms didn’t cause physical harm to anyone, nor does it seem that the perpetrator was trying to cause such harm. (Now, if he or she had disabled the sirens so they wouldn’t go off when a tornado came, that would qualify, especially if people were actually hurt.) Unless the perpetrator worked for the local police, I don’t see how a position of public trust or organized criminal activity was involved, and it’s not a sex crime. We doubt – but don’t know – if our guy was in prison, or had been in prison or was out on bail or on probation when he committed the crime, but we can pretty safely assume he did not use a gun in committing it.
So there you go. If we have a first-time offender (my guess), the sentence would be probation. And that brings me back to an issue I’ve written about before: the balance between the “harm” a crime inflicts and the punishment imposed on the perpetrator. Is probation enough in this instance?
To figure that out, let’s consider the “harm” involved. As I noted above, this is not a scenario in which the perpetrator crippled the alarm systems, preventing people from learning that a tornado was heading toward them; that’s a serious “harm.” It’s basically another way of assaulting (maybe even killing) people. Here we have the reverse: an alarm when there was no need. The obvious “harm” is aggravation – people seem to have been unnerved at an alarm going off when the weather was clear and calm (and weather forecasters probably hadn’t said anything about possible tornados), and they also seem to have been justifiably aggravated when they found out it was a hoax.
So the “harm” inflicted was . . . aggravation . . . only? If it was just aggravation, then probation certainly makes sense. But what about the other “harm” – the “harm” to the warning system: As I noted above, police say they won’t use the sirens anymore until they catch their hacker (could take a while . . . it happened in March and no one has been caught) or they fix the system. I don’t know what fixing the system means; I assume it means hardening the system, and I don’t know how long that will take (or if it’s really possible).
Let’s assume it’s possible and it takes, say, two months. I checked two websites, and both told me that while the peak season for tornados in Ohio is April through July, they have happened later. So we have the additional “harm” of having the warning system taken offline during a period when tornados are not as likely as they were last month (I assume they know that) but are still possible. And, of course, there could be other emergencies . . . flood? . . . . terrorism? . . . asteroid? (I’m trying to be a little facetious because it really is unsettling to think of a town with no emergency system in place.)
The problem with these “harms” is that they’re not the kind of “harms” criminal law has ever taken cognizance of. As I’ve noted in various articles, criminal law is the oldest law. You see versions of criminal law in organized animal societies, like wolf packs. There are rules (don’t hurt others, don’t take their food, etc.) and there are punishments for those who break the rules. But neither wolves nor humans have really had to think about nebulous “harms” like the unavailability of a siren warning system before.
We see this in the federal statute that could apply here. Section 1030(a)(5)(A) of Title 18 of the U.S. Code makes it a federal crime to intentionally access a computer without being authorized to do so AND do one of the following: (i) cause “loss” of at least $5,000; (ii) alter or damage medical records or medical care; (iii) cause physical injury to someone; (iv) cause a threat to public health or safety; or (v) damage a computer system a government entity uses in the administration of justice, national defense or national security. (If you want to read more about the statute, check out this Department of Justice manual.)
I don’t think the $5,000 loss option works because that option only applies when the perpetrator’s conduct inflicted economic loss. I don’t see any economic loss here. The best candidate is probably the “threat to public health or safety” option. We don’t have a threat to public health, as such, but we would seem to have a threat to public safety. The Department of Justice manual I noted above says this option applies when a computer intruder creates a threat by targeting an element of the country’s critical infrastructure. I’m assuming that would encompass this warning system.
If not, we could still try for the final option: a computer used in the administration of justice, national defense or national security. I don’t think the last two alternatives (national defense and national security) apply; this part of the statute is apparently directed only at federally owned computers, like Department of Defense computers. Since the Brunswick system seems to be operated by the police, the “administration of justice” alternative should apply.
Okay, we’ve decided the unknown hacker could also be prosecuted under 18 U.S. Code § 1030. What kind of penalty would apply there? If the perpetrator only meant to hack the system, and didn’t realize that in so doing he/she was causing “harm” to people or to the administration of justice, the crime would be a misdemeanor if it was his/her first offense.
Misdemeanors have a maximum penalty of one year in prison, but this crime would probably result in probation, plus maybe community service. If this was a second or subsequent offense, then it would be a felony, and the person would be facing a 10-year prison sentence. If the person meant to cause the “harm” we identified above, then the crime would become a felony, even for a first offense, and would bring a sentence of 5 or 10 years in prison (5 if the offender recklessly caused the “harm,” 10 if he/she intended to cause the “harm”). The actual process of sentencing would take a number of factors into account, including offender characteristics (e.g., age, priors) and issues surrounding the “harm” inflicted (e.g., the amount of risk to public safety, etc.).
The federal statute does a better job of actually encompassing the “harm” inflicted by a crime like this, but I’m still wondering how we should define and weigh the actual “harm” inflicted in this case, and others like it. I found a few news stories about cases in which a hacker attacked, or was preparing to attack, a 911 system – the attack consisting of overwhelming the system so no one could use it. That, as I noted above, is not what this hacker did directly . . . but in a sense, his/her attack on the Brunswick emergency system had the same effect . . . since it was taken offline until. . . .
If you look at a case like this and just see the “harm” of aggravation (compounded, some quantum of aggravation for each of I don’t know how many victims), then I don’t see any reason why probation isn’t enough of a sanction. If you look at it and see additional, incremental “harm” of the type I’ve speculated about above, you then have to decide if you think that “harm” warrants more punishment . . . such as actual jail time.
It’s actually, I think, a rather difficult issue. The intertwining of the digital and tangible worlds means, IMHO, that we’re going to see the infliction of more intangible, nebulous “harms.” We will have to decide if we want our criminal law to encompass the infliction of those “harms” (which is the easier issue) and, if so, what kind of sanctions we think are appropriate for someone who . . . aggravates.
Wednesday, August 20, 2008
"Community Standards"
In the U.S. and elsewhere, it is a crime to publish “obscene” material.
Obscenity is actually a very recent crime, as crimes go. If you look back in history, especially ancient history, you find that people were not concerned about what we would call obscenity. They seemed to enjoy it, actually.
In the Anglo-American legal tradition, the criminalization of obscenity dates back to 1727, when the English Court of King’s Bench held that it was a crime to publish material that was “objectionable solely because of its offensive sexual content.” Dominus Rex v. Curl, 93 Eng. Rep. 849 (K.B. 1727).
Since obscenity didn’t become a crime in England until long after English colonists had settled America, there really wasn’t any interest in prosecuting obscenity in America before the nineteenth century. There were a couple of obscenity prosecutions in the first part of the nineteenth century, but obscenity really became an issue in the 1870s, when groups concerned “for society’s moral character” began lobbying to have obscene material suppressed.
The most successful of these groups was Anthony Comstock’s New York Society for the Suppression of Vice. In 1873, Comstock persuaded Congress to pass the Comstock Act, which criminalized the transportation and/or delivery of “obscene, lewd or lascivious” material. Comstock’s ideas as to what was obscene were pretty all-encompassing; as the Wikipedia entry on him notes, he was even able to ban anatomy textbooks from being sent to medical students via the mail. (He also apparently bragged about driving some people who had written what he considered obscene material to commit suicide . . . obviously a lovely man.)
What, you ask, does this have to do with cybercrime? Well, Anthony Comstock may be long gone, but we still have obscenity crimes. Earlier this year, two defendants charged with violating a federal obscenity statute moved to dismiss the charges, arguing that the statute is unconstitutional when applied to the dissemination of allegedly obscene material online.
In 2007, the Department of Justice charged Max World Entertainment, Inc. and Paul Little a/k/a Max Hardcore with 5 counts of violating 18 U.S. Code § 1465 by using the Internet to sell and distribute obscene material. U.S. v. Little, 2008 WL 151875 (U.S. District Court for the Middle District of Florida 2008). The allegedly obscene materials were video files; if you want to find out what kind of files they probably were, check out Max Hardcore’s entry on Wikipedia.
Section 1465 makes it a federal crime to use the Internet (or the mails or other means of transportation and communication) to distribute an “obscene. . . picture, film, . . . or other article”. (The Supreme Court has held that it is not a crime to possess obscenity; it is a crime to receive it, sell it, transport it or distribute it. U.S. v. Orito, 413 U.S. 139 (1973).)
So, to violate this statute the video file at issue had to be “obscene.” The Max Hardcore defendants argued that the statute was unconstitutional when it was applied to what they had done. Specifically, they argued that federal obscenity statutes are “`unworkable when applied to the Internet.” U.S. v. Little, supra.
In order to convict someone of violating § 1465, the trier of fact (the jury in a jury trial or the judge in a bench trial, i.e., a trial without a jury) must find that the material at issue is obscene. To make that determination, the trier of fact must apply this test, which comes from the U.S. Supreme Court’s decision in Miller v. California, 413 U.S. 15 (1973):
More precisely, the defendants had two arguments and the government had its own counter-arguments:
It might do that, though. In Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002), the Supreme Court’s opinion, which was written by Justice Thomas, held that the use of the Miller test did not by itself make a statute that prohibits disseminating obscenity to minors unconstitutional. But other Justices, writing separately, pointed out that the Internet is a very new medium of communication, one that make any reliance on the standards of a particular community problematic.
Personally, for whatever it’s worth, I agree with those Justices and I agree with the Max Hardcore defendants. Even if the Miller test made sense thirty-five years ago, when print and video materials had to be physically shipped into a particular community, it makes no sense at all now. As the Max Hardcore defendants and other defendants in similar cases have pointed out, it makes no sense to talk about community standards when you’re dealing with content that is posted on a website which is accessible from essentially anywhere on the globe.
You don’t have the element of intentionality you had when a purveyor of obscene materials knew what was being loaded on trucks and where the trucks were going (Tulsa, San Francisco, Milwaukee, etc.). The Internet reverses the dynamic you had with the physical transportation of obscenity into a particular community in a specific state. Instead of sending obscenity into a particular location, the operators of sites like the Max Hardcore site put the material online and their customers, in effect, come to them. They are not injecting obscenity into the local community (Tulsa or Dayton or Salem or Tallahassee); members of that community are seeking the material out and retrieving it on their own.
I’m personally not a fan of obscenity, but I really don’t care if others are. It seems to me that the logical thing is to get rid of the Miller test and come up with something else, if there IS a workable obscenity standard in the era of the Internet. Maybe we should just de-Comstock the federal criminal code (and state codes that have similar provisions) and go back to a world in which no one really cares about obscenity. I, for one, would much rather have law enforcement officers and prosecutors pursuing people who do “harm” to others -- like terrorists, murderers, rapists, thieves, fraudsters, etc. – that to spend their time on this kind of thing.
But, of course, I could be wrong.
Obscenity is actually a very recent crime, as crimes go. If you look back in history, especially ancient history, you find that people were not concerned about what we would call obscenity. They seemed to enjoy it, actually.
In the Anglo-American legal tradition, the criminalization of obscenity dates back to 1727, when the English Court of King’s Bench held that it was a crime to publish material that was “objectionable solely because of its offensive sexual content.” Dominus Rex v. Curl, 93 Eng. Rep. 849 (K.B. 1727).
Since obscenity didn’t become a crime in England until long after English colonists had settled America, there really wasn’t any interest in prosecuting obscenity in America before the nineteenth century. There were a couple of obscenity prosecutions in the first part of the nineteenth century, but it really became an issue in the 1870s, when groups concerned “for society’s moral character” began lobbying to have obscene material suppressed.
The most successful of these groups was Anthony Comstock’s New York Society for the Suppression of Vice. In 1873, Comstock persuaded Congress to pass the Comstock Act, which criminalized the transportation and/or delivery of “obscene, lewd or lascivious” material. Comstock’s ideas as to what was obscene were pretty all-encompassing; as the Wikipedia entry on him notes, he was even able to ban anatomy textbooks from being sent to medical students via the mail. (He also apparently bragged about driving some people who had written what he considered obscene material to commit suicide . . . obviously a lovely man.)
What, you ask, does this have to do with cybercrime? Well, Anthony Comstock may be long gone, but we still have obscenity crimes. Earlier this year, two defendants charged with violating a federal obscenity statute moved to dismiss the charges, arguing that the statute is unconstitutional when applied to the dissemination of allegedly obscene material online.
In 2007, the Department of Justice charged Max World Entertainment, Inc. and Paul Little a/k/a Max Hardcore with five counts of violating 18 U.S. Code § 1465 by using the Internet to sell and distribute obscene material. U.S. v. Little, 2008 WL 151875 (U.S. District Court for the Middle District of Florida 2008). The allegedly obscene material consisted of video files; if you want to find out what kind of files they probably were, check out Max Hardcore’s entry on Wikipedia.
Section 1465 makes it a federal crime to use the Internet (or the mails or other means of transportation and communication) to distribute an “obscene . . . picture, film, . . . or other article”. (The Supreme Court has held that while it is not a crime to possess obscenity, it is a crime to receive it, sell it, transport it or distribute it. U.S. v. Orito, 413 U.S. 139 (1973).)
So, to violate this statute the video files at issue had to be “obscene.” The Max Hardcore defendants argued that the statute was unconstitutional when it was applied to what they had done. Specifically, they argued that federal obscenity statutes are “unworkable when applied to the Internet.” U.S. v. Little, supra.
In order to convict someone of violating § 1465, the trier of fact (the jury in a jury trial or the judge in a bench trial, i.e., a trial without a jury) must find that the material at issue is obscene. To make that determination, the trier of fact must apply this test, which comes from the U.S. Supreme Court’s decision in Miller v. California, 413 U.S. 15 (1973):
(a) Whether ‘the average person, applying contemporary community standards’ would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political or scientific value.

The Max Hardcore defendants argued that this test cannot “be applied to the Internet because it is impossible to know what work ‘taken as a whole’ means when all web content is interconnected, and it is equally impossible to determine the community standards by which the material should be judged when the Internet reaches across the nation and across the world.” U.S. v. Little, supra.
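For readers who like to see the structure of the test laid bare, the Miller test is a conjunctive three-prong check: material is obscene only if every prong is satisfied. Here is a minimal sketch of that structure (the function and parameter names are my own labels, purely illustrative, not anything from the opinion):

```python
# Hypothetical sketch of the Miller test's structure: material is obscene
# only if ALL three prongs are satisfied. Prong labels are mine, not the Court's.

def is_obscene(appeals_to_prurient_interest: bool,
               patently_offensive_under_state_law: bool,
               lacks_serious_value: bool) -> bool:
    """Miller v. California, 413 U.S. 15 (1973): a conjunctive three-prong test."""
    return (appeals_to_prurient_interest
            and patently_offensive_under_state_law
            and lacks_serious_value)

# A work with serious literary, artistic, political or scientific value fails
# prong (c), so it is not obscene regardless of the first two prongs:
print(is_obscene(True, True, False))  # False
```

The conjunctive structure is why prong (c) does so much work in practice: serious value is a complete escape hatch, however the community-standards prongs come out.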
More precisely, the defendants had two arguments and the government had its own counter-arguments:
The Defendants argue that in attempting to apply the Miller test to the facts of this case, the Court and the Jury must look at the Max Hardcore website as a whole, which includes numerous different web pages and interconnection to the World Wide Web, creating an impossible task for both the Judge and the Jury. The Government . . . argues that . . . the work to be considered under the Miller test are the five video files which were downloaded from the Max Hardcore web site. This Court agrees with the Government in that under the Miller test, the material to be considered, viewed and `taken as a whole’ . . . are the five video files that Defendants created and made available as down loadable files on their web site.
Similarly, Defendants argue that . . . because the Max Hardcore website can be viewed from anywhere in the world, the community standards to be applied when determining whether the video files at issue are obscene would be the community standards of the world, which are impossible to ascertain. The Government argues that under the Miller test the community standards to be applied can be the standards of the community into which the allegedly obscene material moves or is sent, as well as the standards of the community from which it is sent. This Court again agrees with the Government. U.S. v. Little, supra.

This Florida district judge essentially said she was going with the government because if the Supreme Court wants a different standard – a standard other than the community-standards approach set out in Miller – to apply to Internet obscenity cases, it’s going to have to say so . . . which it has so far not done.
It might do that, though. In Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002), the Supreme Court’s opinion, which was written by Justice Thomas, held that the use of the Miller test did not by itself make a statute that prohibits disseminating obscenity to minors unconstitutional. But other Justices, writing separately, pointed out that the Internet is a very new medium of communication, one that makes any reliance on the standards of a particular community problematic.
Personally, for whatever it’s worth, I agree with those Justices and I agree with the Max Hardcore defendants. Even if the Miller test made sense thirty-five years ago, when print and video materials had to be physically shipped into a particular community, it makes no sense at all now. As the Max Hardcore defendants and other defendants in similar cases have pointed out, it makes no sense to talk about community standards when you’re dealing with content that is posted on a website which is accessible from essentially anywhere on the globe.
You don’t have the element of intentionality you had when a purveyor of obscene materials knew what was being loaded on trucks and where the trucks were going (Tulsa, San Francisco, Milwaukee, etc.). The Internet reverses the dynamic you had with the physical transportation of obscenity into a particular community in a specific state. Instead of sending obscenity into a particular location, the operators of sites like the Max Hardcore site put the material online and their customers, in effect, come to them. They are not injecting obscenity into the local community (Tulsa or Dayton or Salem or Tallahassee); members of that community are seeking the material out and retrieving it on their own.
I’m personally not a fan of obscenity, but I really don’t care if others are. It seems to me that the logical thing is to get rid of the Miller test and come up with something else, if there IS a workable obscenity standard in the era of the Internet. Maybe we should just de-Comstock the federal criminal code (and the state codes that have similar provisions) and go back to a world in which no one really cares about obscenity. I, for one, would much rather have law enforcement officers and prosecutors pursuing people who do “harm” to others – terrorists, murderers, rapists, thieves, fraudsters, etc. – than have them spend their time on this kind of thing.
But, of course, I could be wrong.
Monday, August 18, 2008
Weird Cyberstalking Case
I’m again indebted to Magistrate Marcia Linsky of the Allen County (Indiana) Superior Court – she sent me a link to a news story on what is a pretty weird cyberstalking case.
You can find the story in text and video here. One thing that’s interesting about it is that it identifies the victims, but not the perpetrator (maybe for his safety?).
What’s also interesting – and a little weird – is what he did that’s being prosecuted as stalking and harassment. It’s really not either, but those are probably the only charges that would fit.
Here’s what I can find about the facts: Mr. X (as we’ll call the perpetrator) is a 23-year-old man who worked at a church in Wabash, Indiana. (I don’t know in what capacity.) He apparently decided to assume the identities of two young women – one of them is 28 and the other is 16 – whose family attended the church. Mr. X created Facebook pages under their names and then pretended to be them, online. On the respective Facebook pages he created for each of them he posted photos of each girl, listed their addresses and phone numbers and described their after-school activities and work places in detail. This went on for two years.
Why did he do this, you ask? Well, if this was really stalking or harassment, he would have done this to torment the two young women. He would have done things like post false information about them (to wreck their reputations) or use the Facebook pages to do other things that would unnerve, even terrify them. As Wikipedia notes, stalking is “a form of mental assault, in which the perpetrator repeatedly, unwontedly and disruptively breaks into the life-world of the victim, with whom he has no relationship.” Stalkers and harassers focus their efforts on their victims, whether they carry out the activity in the real world or in the virtual world of cyberspace.
Mr. X did none of that. The victims didn’t even know about the fake Facebook pages until recently, after they’d been up for two years. So he wasn’t interested in them; he didn’t target them in any way, whether for a “mental assault” or threats or anything else we’re familiar with (or, more accurately, the criminal law is familiar with).
Nope. He was doing this for his very own benefit. He used the identities of these young women, and the Facebook pages he created in their names, to “have virtual sex with men around the world”, as the story cited above explains. The story also notes that the language used in these encounters was so graphic they couldn’t describe it on air or in print, but that’s standard for online sex, nothing new there.
(I don’t know what Mr. X’s agenda was, but I wonder if he knew about Second Life; as you may know, a lot of males use female avatars – to have sex, among other things – on Second Life. So it seems to me he could have accomplished pretty much the same thing by creating a female avatar in Second Life and gone trolling for sex partners.)
I don’t see what Mr. X did as stalking or harassment. I see this as a case of online imposture, something I’ve written about before. Mr. X simply used the identities of these two women to have a good time, in his own way. I’m sure he, like all imposters, had no desire for them to discover what he was doing, because it would then come to an end.
How did they discover it? Well, they didn’t. It seems the pastor of their church found the Facebook sites when he was “compiling an Internet list of his congregation” he could take with him when he left Wabash for a new post.
Let’s analyze what Mr. X did to see if it fits within Indiana’s definitions of harassment and/or stalking. Indiana Code § 35-45-2-2(a) defines harassment as follows:
A person who, with intent to harass, annoy, or alarm another person but with no intent of legitimate communication:

(1) makes a telephone call, whether or not a conversation ensues;

(2) communicates with a person by . . . mail, or other . . . written communication;

(3) transmits an obscene message . . . on a Citizens Radio Service channel; or

(4) uses a computer network . . . to: (A) communicate with a person; or (B) transmit an obscene message or indecent or profane words to a person;

commits harassment, a Class B misdemeanor.

I assume this is the statute Mr. X has been charged under, because the news story says he’s charged with misdemeanor harassment. I’m not sure this statute applies: He didn’t make a phone call (at least that’s not at issue in the charge); he didn’t send any written communications to either of the victims; he didn’t send anything on Citizens Radio. He did use a computer to communicate with a person and/or transmit an obscene message or indecent words to a person, but he didn’t do any of that to the two women who are his ostensible victims.
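The statute's element structure can be sketched as a simple predicate: the prohibited intent, the absence of legitimate communication, plus at least one enumerated act. This is my own illustrative rendering (the act labels are mine, not the statute's), and it makes visible what the statutory text alone resolves and what it leaves open:

```python
# Illustrative sketch of Indiana Code § 35-45-2-2(a)'s structure: an intent
# element plus any one of four enumerated acts. Act labels are mine.

QUALIFYING_ACTS = {
    "telephone_call",                  # subsection (1)
    "written_communication",           # subsection (2)
    "citizens_radio_obscene_message",  # subsection (3)
    "computer_network_message",        # subsection (4)
}

def commits_harassment(intent_to_harass: bool,
                       legitimate_communication: bool,
                       acts: set) -> bool:
    if not intent_to_harass or legitimate_communication:
        return False
    # any single enumerated act suffices
    return bool(acts & QUALIFYING_ACTS)

# Mr. X plainly satisfies the act element via subsection (4); the real fight,
# as discussed below, is whether anything was directed AT the victims at all:
print(commits_harassment(True, False, {"computer_network_message"}))  # True
```

Note what the sketch exposes: the act element is trivially satisfied by any computer-network communication, so the whole charge turns on reading an unwritten "directed toward the victim" requirement into the intent element.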
The crime of harassment is, as I’ve noted before, similar to the crime of threatening someone: Both require that the perpetrator direct words and/or acts at an individual for the purpose of harassing, annoying or alarming them. As I noted in an earlier post, one that dealt with the Indiana Supreme Court’s throwing out harassment charges based on material posted on a MySpace page, it is an integral element of harassment that the perpetrator have sent harassing communications TO the victim, not posted something on line that he or she might see and might find harassing, annoying or even alarming.
That element is specifically included in Indiana’s definition of harassment. Indiana Code § 35-45-10-2 defines harassment as “conduct directed toward a victim that . . .would cause a reasonable person to suffer emotional distress and . . . causes the victim to suffer emotional distress.” If I were representing Mr. X, I’d be using that statutory definition and the Indiana Supreme Court case to argue that harassment is not a proper charge on these facts.
What about stalking? Indiana Code § 35-45-10-1 defines it as “a knowing or an intentional course of conduct involving repeated or continuing harassment of another . . . that would cause a reasonable person to feel terrorized, frightened, . . . or threatened and . . . causes the victim to feel terrorized, frightened, . . . or threatened.” Stalking is a Class D felony, so this could be the charge against Mr. X. (It moves up to a Class B or C felony if it involved threats of death or serious bodily injury or the violation of a protective order, none of which would apply here.)
Though the stalking statute doesn’t say it requires conduct that is directed toward the victim, that is an implicit element of all stalking laws. It’s an integral element of the whole notion of stalking; as I noted above, stalking is usually defined as a kind of “mental assault”, and you can’t assault someone physically or mentally unless you direct your efforts at them.
Mr. X didn’t do that. Mr. X did something weird and crummy, but I don’t think it’s stalking or harassment . . . and I suspect the local authorities who charged him know that. The news story – the only one I can find, so far – says the family is working with state and federal legislators to “draft some stricter laws” that would apply to this kind of conduct. The immediate motivation for this effort seems to be that if Mr. X were to be convicted of stalking or harassment, he would not have to register as a sex offender, which they seem to think is appropriate here. I’m not sure about that – he didn’t sexually attack anyone . . . he just took two women’s identities for a ride so he could have virtual sex.
I think it would make a lot more sense for the legislators to explore whether it’s possible, and necessary, to criminalize what really happened here: imposture, the use of another person’s identity for the perpetrator’s own, selfish ends. As I’ve noted before, imposture can sometimes provide the basis for a defamation suit, which might well be possible here. But that’s a civil remedy. I really wonder if we should consider coming up with some kind of new crime: an imposture crime.
Indiana, like most if not all states, makes it a crime to impersonate “a public servant,” such as a law enforcement officer. Some states have a criminal impersonation statute that makes it a crime to pretend to be someone else to commit fraud or some other kind of financial crime. I haven’t gone through the impersonation statutes of all 50 states, so I can’t say whether any of them really reach the kind of impersonation we have here. I do find Colorado’s criminal impersonation statute interesting. Here’s what it says:
A person commits criminal impersonation if he knowingly assumes a false or fictitious identity or capacity, and in such identity or capacity he:

(a) Marries, or pretends to marry . . . or

(b) Becomes bail or surety for a party in an action . . ., civil or criminal . . . ;

(c) Confesses a judgment . . . ; or

(d) Does an act which if done by the person falsely impersonated, might subject such person to . . . civil or criminal . . . liability . . . ; or

(e) Does any other act with intent to unlawfully gain a benefit for himself. . . .

Colorado Revised Statutes § 18-5-113(a). That last one might apply, if we can construe “benefit” as not just a financial benefit (which I’m sure is what it was meant to be).
The crime I’m postulating would not really be about using someone’s identity to DO something (like get married) or GET something (like money). It would in a sense be a true identity theft statute: We’ve never thought of our identities as property; the identity theft statutes we have now are about stealing our “personal identifying information” – our credit cards and SSNs and birthdates and things like that – and using them to commit fraud.
But what about our identity itself? Don’t we own our identity? Don’t I lose something if someone uses my identity, even for an innocuous purpose? Don’t I have the right to control what my identity does? If we decide the answer to those questions is yes, then we should probably go about creating a criminal imposture statute that would reach what Mr. X did.
Friday, August 15, 2008
Hacking a Heart
You may have seen the recent news stories about how some researchers have figured out how to hack a heart.
Researchers at the University of Massachusetts have figured out how to turn off a pacemaker remotely, using wireless communications. There were a number of news stories about their research last spring, but it got a lot of press recently because they did a DefCon presentation on the heart hack.
Computers use wireless signals to program a pacemaker so it can deal with the patient’s unique needs. Not surprisingly, pacemakers’ wireless signaling capacity isn’t protected with passwords or encryption or any other kind of security . . . probably because no one ever thought it would be needed. Pacemakers were invented long before we had the Internet or wireless networking, and probably haven’t been modified to take either of those into account.
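To make the missing safeguard concrete, here is a toy sketch of what authenticating a programming command with a shared-key message authentication code might look like. This is entirely my own illustration, not how any real pacemaker protocol works; the key, command format, and function names are all hypothetical:

```python
# Toy illustration of command authentication via HMAC: the device rejects any
# programming command that does not carry a valid tag under the shared key.
# NOT a real pacemaker protocol -- it just shows the kind of check that,
# per the news stories, current devices lack.
import hmac
import hashlib

SHARED_KEY = b"programmer-device-shared-secret"  # hypothetical provisioned key

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Programmer side: compute an authentication tag for a command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def device_accepts(command: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Device side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"SET_PACING_RATE 70"
tag = sign_command(cmd)
print(device_accepts(cmd, tag))         # True: legitimate, signed command
print(device_accepts(b"SHUTDOWN", tag)) # False: forged command is rejected
```

Without any check of this kind, a radio in range that speaks the protocol can issue any command, which is essentially what the researchers demonstrated.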
The stories all say that this isn’t something people with pacemakers need to be worried about, at least not for the moment. From what I’ve read, the researchers used very expensive technology, which puts the hack outside the reach of most people. And from other things I’ve read, it seems you have to be very close to the pacemaker to be able to send a signal that will shut it down.
I suspect both of those conditions are transient. As we all know, technology evolves very rapidly, and now that the “how” of this has been demonstrated, it will probably not be long before someone figures out how to make it a realistic way to commit murder.
Would it be a cybercrime if, say, Joan used an evolved version of this hack to shut down the pacemaker of her not-so-dearly-beloved-but-very-very-rich uncle Ferd? It would be murder, and it would be committed by using computer technology, so, sure, it would be a cybercrime. (As I’ve noted elsewhere, I define cybercrime as using computer technology to commit a crime, which can be a computer-specific crime like spreading a virus or a regular, garden-variety crime like theft.)
Would there be any difficulty in prosecuting Joan, assuming the prosecution could prove what she did and prove that what she did caused Ferd’s death? I can’t see why there would. As I’ve noted elsewhere, criminal law traditionally has worried about the result – the “harm” – instead of the method. So we outlaw homicide; we don’t separately outlaw homicide by shooting, homicide by stabbing, homicide by poisoning, homicide by strangulation, and so on.
(A number of states do have vehicular homicide statutes, but I think those are artifacts dating back to when cars were new . . .the produce of a feeling that you needed to make it really, really clear that if you ran someone down with a car and killed them, that was homicide. I don’t think any jurisdiction ever felt it necessary to adopt homicide-by-wagon statutes, probably because it would be pretty hard to kill someone with a wagon, unless you caught them off guard.)
Getting back to my point, the substantive law – the law that defines criminal offenses – would not be a problem here: Murder is purposely causing the death of another human being. Joan, in our hypothetical, used the pacemaker for the purpose of killing Ferd, and succeeded. So, that’s murder.
I’m not even sure proving the crime would be that difficult. Not being a technical expert, I can’t opine on how easy it would be for Joan to hide her tracks, but for the moment I’m going to assume that it would be possible to prove Ferd died from a hack, not from natural causes. (I’m also willing to bet that pacemakers are going to become more sophisticated, hopefully more impervious to this kind of hack. Along with that, they might include some feature that could track such a hack, just in case the pacemaker was not able to resist a particular attack.)
What I find interesting about this is the possibility it creates of getting away with murder because no one realizes there has BEEN a murder. There have been anecdotal tales about attempts to hack hospital computers, apparently for the purpose of causing the death of someone or more than one someone’s. There have been stories about efforts to alter the dosage of particular medications, for example, the premise being that the hacker would increase the dosage of a medication so that it would cause the person’s death relatively quickly.
If such a hack were possible, and if hospital personnel didn’t (as I assume they would) notice that the dosage of that particular medication was out of whack for a patient, then it would be a clever way to commit murder. It would probably be even more clever if the killer upped the dosage of the medication for a number of people. That way, it could look even more like medical negligence than murder. And even if someone figured out that it was murder, it would then be necessary to figure out which of the patients was the real target, with the others, sadly, simply serving as the killer’s smokescreens.
I hope the pacemaker hack can somehow be resolved before it becomes possible really to do this to someone else. I hope that because I suspect that if someone were to do this, they’d stand a really good chance of getting away with murder. The death might be attributed simply to the patient’s own fragile condition; or it might be attributed to a faulty pacemaker.
The pacemaker hack illustrates the unanticipated perils we will have to confront as technology becomes an increasingly pervasive aspect of our lives. Pacemakers are a well-established, routine type of implant. Many forecast that in the future we will have other kinds of implants . . . implants designed to make our lives easier by, say, letting us use our brains to access information or communicate wirelessly with each other. Other implants might somehow boost our alertness or intelligence. Those implants, like the pacemakers that have been around for decades, can become a vulnerability, a way to attack someone in new and different ways.
The so far unrealized pacemaker hack as murder also illustrates another aspect of cybercrime. People have been saying for a long time that the best cybercrime is the one no on realizes has been committed.
Researchers at the University of Massachusetts have figured out how to turn off a pacemaker remotely, using wireless communications. There were a number of news stories about their research last spring, but it got a lot of press recently because they did a DefCon presentation on the heart hack.
Computers use wireless signals to program a pacemaker so it can deal with the patient’s unique needs. Not surprisingly, pacemakers’ wireless signaling capacity isn’t protected with passwords or encryption or any other kind of security . . . probably because no one ever thought it would be needed. Pacemakers were invented long before we had the Internet or wireless networking, and probably haven’t been modified to take either of those into account.
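I’m not a device engineer, so purely by way of illustration (nothing here reflects how any real pacemaker is built, and the key and command names are invented), here’s a minimal Python sketch of the difference between a wireless interface that trusts any incoming command and one that demands an authenticated command:

```python
import hashlib
import hmac

# Hypothetical device-specific secret, provisioned at implant time.
SHARED_KEY = b"device-specific-secret"

def handle_command_unauthenticated(command: str) -> str:
    # Models the reported status quo: any radio message is trusted.
    return f"executing {command}"

def handle_command_authenticated(command: str, tag: bytes) -> str:
    # Models a basic fix: reject any command that lacks a valid
    # HMAC tag computed with the device's shared key.
    expected = hmac.new(SHARED_KEY, command.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return "rejected"
    return f"executing {command}"
```

The point of the sketch is only that the conceptual fix – shared-key message authentication – is old and well understood, even if retrofitting it into a battery-constrained implanted device is anything but simple.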
The stories all say that this isn’t something people with pacemakers need to be worried about, at least not for the moment. From what I’ve read, the researchers used very expensive technology, which puts the hack outside the reach of most people. And from other things I’ve read, it seems you have to be very close to the pacemaker to be able to send a signal that will shut it down.
I suspect both of those conditions are transient. As we all know, technology evolves very rapidly, and now that the “how” of this has been demonstrated, it will probably not be long before someone figures out how to make it a realistic way to commit murder.
Would it be a cybercrime if, say, Joan used an evolved version of this hack to shut down the pacemaker of her not-so-dearly-beloved-but-very-very-rich uncle Ferd? It would be murder, and it would be committed by using computer technology, so, sure, it would be a cybercrime. (As I’ve noted elsewhere, I define cybercrime as using computer technology to commit a crime; it can be a computer-specific crime like spreading a virus or a regular, garden-variety crime like theft.)
Would there be any difficulty in prosecuting Joan, assuming the prosecution could prove what she did and prove that what she did caused Ferd’s death? I can’t see why there would. As I’ve noted elsewhere, criminal law traditionally has worried about the result – the “harm” – instead of the method. So we outlaw homicide; we don’t separately outlaw homicide by shooting, homicide by stabbing, homicide by poisoning, homicide by strangulation, and so on.
(A number of states do have vehicular homicide statutes, but I think those are artifacts dating back to when cars were new . . . the product of a feeling that you needed to make it really, really clear that if you ran someone down with a car and killed them, that was homicide. I don’t think any jurisdiction ever felt it necessary to adopt homicide-by-wagon statutes, probably because it would be pretty hard to kill someone with a wagon, unless you caught them off guard.)
Getting back to my point, the substantive law – the law that defines criminal offenses – would not be a problem here: Murder is purposely causing the death of another human being. Joan, in our hypothetical, used the pacemaker for the purpose of killing Ferd, and succeeded. So, that’s murder.
I’m not even sure proving the crime would be that difficult. Not being a technical expert, I can’t opine on how easy it would be for Joan to hide her tracks, but for the moment I’m going to assume that it would be possible to prove Ferd died from a hack, not from natural causes. (I’m also willing to bet that pacemakers are going to become more sophisticated, hopefully more impervious to this kind of hack. Along with that, they might include some feature that could track such a hack, just in case the pacemaker was not able to resist a particular attack.)
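To make that “track such a hack” idea a little more concrete: one standard way to build a tamper-evident event log is a hash chain, where each entry commits to the one before it. This is only a hedged sketch of the general technique – I have no idea what, if anything, device makers actually implement:

```python
import hashlib

GENESIS = "0" * 64  # placeholder digest for the first entry

def append_event(log: list, event: str) -> None:
    # Each entry's digest covers the previous digest, so deleting
    # or altering an earlier entry breaks every later link.
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append((event, digest))

def verify_log(log: list) -> bool:
    # Walk the chain and recompute every digest.
    prev = GENESIS
    for event, digest in log:
        if hashlib.sha256((prev + event).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

A log like that wouldn’t stop an attack, but it would give a medical examiner something to check after the fact – which is exactly the gap I’m worried about below.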
What I find interesting about this is the possibility it creates of getting away with murder because no one realizes there has BEEN a murder. There have been anecdotal tales about attempts to hack hospital computers, apparently for the purpose of causing the death of one or more patients. There have been stories about efforts to alter the dosage of particular medications, for example, the premise being that the hacker would increase the dosage of a medication so that it would cause the person’s death relatively quickly.
If such a hack were possible, and if hospital personnel didn’t (as I assume they would) notice that the dosage of that particular medication was out of whack for a patient, then it would be a clever way to commit murder. It would probably be even more clever if the killer upped the dosage of the medication for a number of people. That way, it could look even more like medical negligence than murder. And even if someone figured out that it was murder, it would then be necessary to figure out which of the patients was the real target, with the others, sadly, simply serving as the killer’s smokescreens.
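Whether anyone notices a dosage being “out of whack” depends partly on whether anything in the system flags it automatically. As a hedged sketch – the drug names and limits here are entirely invented, not real clinical values – a pharmacy system might bound every order against a per-drug maximum:

```python
# Invented per-drug maximum single doses, in mg -- illustrative only.
MAX_DOSE_MG = {"drug_a": 500, "drug_b": 20}

def check_order(drug: str, dose_mg: float) -> str:
    # An order for an unknown drug or an over-limit dose is flagged
    # for a human to review, rather than dispensed silently.
    limit = MAX_DOSE_MG.get(drug)
    if limit is None:
        return "unknown drug: hold for review"
    if dose_mg > limit:
        return "alert: dose exceeds maximum"
    return "ok"
```

Even a crude bound like this would force the hypothetical killer to stay inside plausible dosages, which makes the quick-death scenario harder to pull off unnoticed.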
I hope the pacemaker hack can somehow be resolved before it really becomes possible to do this to someone. I hope that because I suspect that if someone were to do this, they’d stand a really good chance of getting away with murder. The death might be attributed simply to the patient’s own fragile condition; or it might be attributed to a faulty pacemaker.
The pacemaker hack illustrates the unanticipated perils we will have to confront as technology becomes an increasingly pervasive aspect of our lives. Pacemakers are a well-established, routine type of implant. Many forecast that in the future we will have other kinds of implants . . . implants designed to make our lives easier by, say, letting us use our brains to access information or communicate wirelessly with each other. Other implants might somehow boost our alertness or intelligence. Those implants, like the pacemakers that have been around for decades, can become a vulnerability, a way to attack someone in new and different ways.
The so far unrealized pacemaker hack as murder also illustrates another aspect of cybercrime. People have been saying for a long time that the best cybercrime is the one no one realizes has been committed.
Wednesday, August 13, 2008
An Honest and Stupid Mistake
A case from Texas illustrates how people can get caught up in what is known as a Nigerian or a 419 (after a provision in the Nigerian criminal code) scam.
The case is Tran v. State, 2007 WL 2050305 (Texas Court of Appeals 2007), and here are the facts:
On October 29, 2004, Tran, an engineer for a NASA contractor, opened an account in his son's name at the Johnson Space Center (`JSC’) Credit Union. On the same day, he endorsed and deposited a check made payable to him from Marc USA/Pittsburgh Inc. (`Marc USA’) in the amount of $128,486. In early November, Tran made four cash withdrawals totaling $20,500. On November 8, 2004, Tran executed wire transfers of $45,000 to a bank in Japan and $30,000 to a bank in the Netherlands. Tran subsequently made another cash withdrawal and another wire transfer to the Netherlands. On November 15, 2004, Tran endorsed and deposited a second check from Marc USA in the amount of $193,758. Tran subsequently made several cash withdrawals and wire transfers to the banks in Japan and the Netherlands. On December 3, 2004, Tran endorsed and deposited a third check from Marc USA in the amount of $197,337. Three days later he wire transferred $20,000 to Barclays Bank in London. He also withdrew cash and wired $168,000 to the bank in Japan. Tran v. State, supra.
On December 17, 2004, Tran endorsed and deposited a check from Sundance Square Management Corp. (`Sundance Square’) for $237,653.33. On December 23, 2004, Tran withdrew $5000 in cash and wired $10,000 to Barclays Bank in London. After making this final wire transfer, Tran began calling the JSC Credit Union and requesting that his wire transfers be rescinded. An employee of the credit union [said] that during this time period, Tran phoned her ten to twenty times per day requesting that the bank reclaim the transferred funds. On December 24, 2004, the Sundance Square check was returned to the JSC Credit Union because the issuing bank determined it was counterfeit. Debra Reeder, vice-president of accounting at the credit union then reviewed the first three large checks that Tran had deposited and determined that they were also counterfeit. Marc USA did not issue the checks and did not consent to Tran negotiating or possessing the checks.
Secret Service Agent Steve Dudek conducted an investigation into the . . .checks. Dudek interviewed Tran, and . . .Tran made a statement. . . . In his statement, Tran claimed to have lost almost $200,000 as the victim of a `Nigerian 419 scam’ during 2002 and 2003. Tran said he received an email in 2004 from someone who claimed he could help Tran recoup his losses from the previous scam. Several emails, many from an alleged Nigerian official, Idiata Aigbedion, and an alleged attorney, Francis Ehimen, ensued. According to Tran, at the direction of Aigbedion and Ehimen, he agreed to open a separate account at his credit union. Tran indicated he only agreed to participate if he did not have to contribute any of his own money. Tran admitted that he deposited the four checks and made wire transfers to various banks around the world, something he had agreed to do in exchange for a fee. He also made several cash withdrawals. Tran stated he did not know that the first three checks were counterfeit until he was arrested. When the fourth check did not clear, Tran said, `the bank caught it this time.’ According to Tran, it was only then that he realized he `had made an honest and stupid mistake.’
Tran was charged with theft, a felony, and was convicted by a jury. The trial court sentenced him to serve 25 years in prison. Not surprisingly, he appealed, which is what this opinion is about.
Tran claimed the evidence presented to the jury was not sufficient to justify their finding him guilty. In considering this claim, the Court of Appeals noted that in reviewing the "legal sufficiency of the evidence, we do not ask whether we believe that the evidence at trial established guilt beyond a reasonable doubt. . . . Rather, we examine the evidence in the light most favorable to the verdict to determine whether any rational trier of fact could have found the essential elements of the offense beyond a reasonable doubt.” Tran v. State, supra.
The court then explained what the jury had to find to convict Tran of theft:
A person commits theft if he unlawfully appropriates property with intent to deprive the owner of it without the owner's effective consent. . . . The jury charge . . . included an instruction on the law of parties. A person is criminally responsible as a party to an offense if the offense is committed by the conduct of another if, acting with intent to promote or assist the commission of the offense, he solicits, encourages, directs, aids, or attempts to aid the other person to commit the offense. Tran v. State, supra.
[T]he jury was authorized to convict Tran as a principle if it found that Tran, pursuant to one scheme and continuing course of conduct, appropriated by acquiring and otherwise exercising control over money owned by Debra Reeder and the JSC Credit Union, with the intent to deprive the complainants of the property. The jury was also authorized to convict him as a party if it found beyond a reasonable doubt that Idiata Aigbedion or Francis Ehimen unlawfully, pursuant to one scheme and continuing course of conduct, appropriated by acquiring or otherwise exercising control over money owned by Debra Reeder and the JSC Credit Union with the intent to deprive the complainants of the property, and that Tran, with the intent to promote or assist the commission of the offense, solicited, encouraged, directed, aided or attempted to aid Idiata Aigbedion or Francis Ehimen to commit the offense.
In deciding whether a rational jury could have convicted Tran, the court noted that “the crucial issue is intent.” There was no dispute about what Tran DID (the actus reus of the crime); the only dispute was whether he acted with the necessary intent, the mens rea of the crime. The Court of Appeals found that there was evidence from which a rational jury could have found, beyond a reasonable doubt, that Tran intentionally committed theft:
Secret Service agents reviewed email correspondence and other documents found on the hard drive of Tran's home computer. Tran corresponded via email with Idiata Aigbedion and Francis Ehimen. Over the course of the email correspondence, Tran received wire transfer instructions. On October 8, 2004, prior to receiving the first check, Tran sent an email to Ehimen stating, `Let's hope this check is for real.’ In a subsequent email correspondence with Aigbedion, Tran stated (Tran v. State, supra):
Since the Sept. 11, 2001 terrorist attacked [sic] on the U.S. Soil, the U.S. Justice department had a new Law, the Patriot Act, which was created by the U.S. Congress, It has the legal right to monitor all money transactions in and out of the country, especially from those countries that are listed under Terrorist is [sic] watch. Unfortunately, Nigeria is on that list.
When they see a large sum of cash send [sic] in to Nigeria with no clear business or personal reasons, they can stop it . . . and give it back to the sender. They are not confiscating it, because they are suspicious of the transaction only, if they have considered as some sort of illegal activity, then they will take away the money and conducting a formal investigations, this is how my previous attorney and his `clowns’ were arrested, last time.
Based on these emails, the jury could have inferred that Tran knew the checks were counterfeit and thus that Tran was not the victim of a scam. Tran expressed concern that the checks were not “real” and discussed with Aigbedon how to avoid regulations intended to discover suspicious transactions.
Tran made inconsistent statements to credit union employees about the nature of the account he opened in his son's name. He initially told the credit union that the account was for his son's college fund, but when Reeder asked where the large checks originated, Tran told her he was attempting to start a business in another country. The email correspondence also revealed that Tran conducted the transactions under the guise of an international business. Further, Tran's actions prior to the deposit of the last check indicated he was not an innocent victim of a scam. Tran phoned the credit union ten to twenty times per day attempting to reclaim the money from the wire transfers. The jury could have inferred from this evidence that Tran knew the checks were counterfeit and had reason to believe that the credit union would discover the theft.
I don’t know how often people get prosecuted for participating in these scams, but I found several reported cases in which people were prosecuted for doing essentially what Tran did. And, like Tran, they argued that they were innocent dupes. In those cases, too, the government was able to establish the necessary intent with evidence from the person’s computer, as we have here, and/or by using inferences from the way they conducted themselves. (Referring to why your former attorney and his “clowns” were arrested in connection with a similar incident is a pretty good way to help the government out, here.)
We hear a lot about Nigerian scams, but I don’t think we hear much about the people in the U.S. who become accomplices to those perpetrating these scams. Even though I tend not to have a great deal of sympathy for those who fall for the scams (you are, after all, being asked to break the law to some extent, which has always been the “hook” in this kind of con), it’s still a pretty rotten thing to do.
Monday, August 11, 2008
Loophole
Like a fire escape, a loophole lets us escape from something . . . legal liability, a job assignment we don’t want, etc.
In a recent case from the U.S. Court of Appeals for the Federal Circuit, the U.S. government exploited a kind of loophole to avoid being held liable for copyright infringement and for violating the Digital Millennium Copyright Act (the DMCA).
The case is Blueport Co. v. U.S., 2008 WL 2854127 (Fed. Cir. 2008). You can find the opinion on the Federal Circuit’s website: here. Look for opinion number 07-5140.pdf.
Here are the facts in the case:
Blueport claims that the Government . . . infringed Blueport's copyright on . . . `the AUMD program.’ The AUMD program was written by Air Force Technical Sergeant Mark Davenport. On March 6, 2000, Davenport assigned all his rights in the AUMD program to Blueport.Blueport Co. v. U.S. supra.
When Davenport wrote the AUMD program, he was employed as a manager of the Air Force Manpower Data System (`MDS’), a database containing manpower profiles for each unit in the Air Force. In his capacity as an MDS Manager, Davenport updated the MDS with new data and provided reports . . . to Air Force personnel. . . . Davenport was also a member of the Air Force's Manpower User Group, . . . who provided guidance on the use of the MDS. Based on his experience with the MDS, Davenport concluded that the software the Air Force used . . . was inefficient and began seeking ways to redesign the software. . . . [He] learned the . . . programming . . . necessary to write the AUMD program on his own time and with his own resources. Davenport then wrote the source code for the AUMD program while at home on his personal computer. Although he wrote the program solely at his home and at his own initiative, Davenport's intent in writing the program was that other Air Force manpower personnel would use it.
Davenport shared an early version . . . with a . . . coworker, and both tested the program on the MDS at work during regular business hours. Based on the. . . testing Davenport made changes to the source code . . . on his home computer. Davenport did not . . . at any time . . . bring the . . . source code to work or copy it onto Air Force computers.
Davenport began sharing . . . the AUMD program with other colleagues. At first, [he] shared [it] . . . by giving them a computer disk containing the program or . . . installing the program on their computers. Later, [he] posted the AUMD program on an Air Force web page so that Air Force manpower personnel could download it directly. As the program became popular within the Air Force manpower community, Davenport's superiors asked him to train additional personnel in its use. . . . [H]e continued to modify the program based on feedback . . . and . . . improved its functionality and eliminated programming errors. At some point, Davenport added an automatic expiration date to each new version of the AUMD program so that users were required to download the newest version when the older one expired.
In September 1998, Davenport gave a presentation to senior Air Force manpower officers . . . and. . . `absolutely sold his audience’ on the AUMD program. . . .
[T]he Air Force . . . decided it was becoming too dependent on Davenport for access to the program. . . . Davenport's superiors asked him to turn over the source code . . . which Davenport . . . kept on his home computer. When he refused . . ., his superiors threatened him with a demotion and a pay cut, and excluded him from the Manpower User Group's advisory authority.
Davenport assigned all his rights in the AUMD program to Blueport, [which offered to license the program to the Air Force.] The Air Force refused . . . and solicited other contractors to recreate the AUMD program. The Air Force . . . contracted with Science Applications International Corporation. At the request of the Air Force, SAIC programmers modified the AUMD program's object code to extend its expiration date. This modification allowed Air Force . . . personnel to continue to use the AUMD program despite Davenport's refusal to provide the source code.
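The expiration mechanism the court describes is a common pattern in software distribution: a date baked into the program at build time, which is exactly the kind of constant a contractor could patch in the object code to extend. Davenport’s actual code was never disclosed, so this is only a generic sketch of the technique:

```python
import datetime

# Hypothetical hard-coded expiration date, compiled into each release.
EXPIRES = datetime.date(2000, 6, 1)

def check_expired(today: datetime.date) -> bool:
    # Once the built-in date passes, the program refuses to run,
    # forcing users to download the newest version.
    return today > EXPIRES
```

Because the cutoff lives in the compiled binary as ordinary data, extending it doesn’t require the source code at all – which is presumably why SAIC could do it for the Air Force even though Davenport kept the source at home.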
Blueport sued the government, claiming the Air Force (i) infringed its copyright in the AUMD program and (ii) violated the DMCA by extending the expiration date in the AUMD program's object code, thus circumventing measures Blueport took to prevent its unauthorized use. The government moved to dismiss the case for lack of jurisdiction, and the Court of Federal Claims (the CFC is the court you go to if you’re suing the federal government) granted the motion. Blueport appealed to the Federal Circuit Court of Appeals, which is what this opinion is about.
Jurisdiction, as I’ve noted before, is a court’s power to hear and decide a particular case. Courts are usually assumed to have power to decide a particular case, and will do so unless there is some reason why a particular defendant is immune from suit. In the U.S., for example, law enforcement officers have a qualified immunity from suit in cases claiming, say, excessive use of force; that means they are presumed to be immune, but the plaintiff can eliminate that immunity by showing there’s a reason it shouldn’t apply in this particular case. What the U.S. government did in this case is similar: It raised the issue of sovereign immunity from suit.
Sovereign immunity is a neat little principle that derives from English common law. It says sovereigns – the U.S. government, a state government, a county, local or city government – cannot be sued unless they agree. It apparently dates back to the time when the king “could do no wrong,” literally, because the king (the sovereign) basically owned everything and everyone and was, after all, the one who made the laws. If you make the laws, you pretty much don’t have to let yourself be sued for breaking them.
In this case, the U.S. government didn’t deny (or admit) any of the claims Blueport made; it simply said it could not be sued because it hadn’t waived its sovereign immunity from suit. The statute at issue was 28 U.S. Code § 1498(b) -- the statute in which the U.S. government waives some of its sovereign immunity in copyright cases. In § 1498(b), the federal government agrees to be sued for copyright infringement and/or under the DMCA EXCEPT when certain circumstances exist. Here are the two provisions that were at issue in the Blueport case:
[A] Government employee shall have a right of action against the Government under this subsection except where he was in a position to order, influence, or induce use of the copyrighted work by the Government;
[T]his subsection shall not confer a right of action on any copyright owner . . . with respect to any copyrighted work prepared by a person while in the employment or service of the United States, where the copyrighted work was prepared as a part of the official functions of the employee, or in the preparation of which Government time, material, or facilities were used.
28 U.S. Code § 1498(b).
Can you see where this is going? The government argued that it had not waived sovereign immunity in this case because (i) Davenport was in a position to “order, influence, or induce use of” the AUMD program by the government; and/or (ii) he prepared it as part of his official duties and/or used government time, material and facilities to do so. Blueport Co. v. U.S. supra.
Like the lower court (the CFC), the Federal Circuit Court of Appeals held that the first option applied, which meant the government had not waived its sovereign immunity in this case (and it didn’t need to consider the second option).
[T]he CFC found that Davenport's position as a member of the Air Force manpower community gave him access and authority to distribute the AUMD program freely to his colleagues. . . . [T]he CFC found that Davenport distributed the AUMD program both by sharing individual copies with his colleagues and by posting the program on an Air Force web page so that . . . people in the Air Force manpower community could access it. The CFC also found that Davenport demonstrated the AUMD program to senior Air Force manpower personnel and was part of the Manpower User Group's advisory authority. . . . In addition, the CFC concluded . . . Davenport was in a position to influence and induce the Air Force's use of the program. We agree. Because Blueport's rights in the AUMD program are derived from Davenport, . . . Blueport's copyright infringement claim against the Government is precluded by the `order, influence, or induce’ proviso. Blueport Co. v. U.S. supra.

That was the court’s decision on the ordinary copyright infringement claims. As to the DMCA circumvention claim, it found that nothing in the DMCA waives the government’s sovereign immunity from suit. Blueport Co. v. U.S. supra. Since there is no waiver in the DMCA, the government cannot be sued under that statute, either. Blueport Co. v. U.S. supra. So Blueport can’t sue, at all . . . it’s over.
It’s good to be the king.