Thursday, September 28, 2006
I’ve written extensively here and, especially, elsewhere about how we need to use legal rules to hold individuals and entities liable for not taking reasonable steps to secure their computer systems. The goal, I argue, is to alter our current culture – to create a climate in which we – the individual end-users and the entity intermediate-, originating-, whatever-users – take security seriously and take it as our individual and collective responsibility.
I’ve just had an object lesson in how far we have to go to achieve that . . . a lesson in humility, maybe . . . or maybe just a good, solid dose of early twenty-first century reality.
I’m a professor at a law school, which is part of a university. Like all law schools, ours is a separate operational unit for most purposes, including internal technology. We do, though, rely on the university’s technical staff for certain things, some of which implicate computer security. That’s about all I’m going to say about organizational responsibilities because my purpose here is not to get anyone into trouble – it is, as I said earlier, simply to recount my recent encounter with reality.
At home, I have my own laptop, my own software, my own security arrangements, etc. At the law school, I use a law school-provided laptop which runs law school-provided software (via university arrangements), and I access the Internet via the law school’s wired connection, which has firewalls (sometimes very annoying firewalls and filters, I might add) and other security measures. My laptop has antivirus software provided by a major, reputable company, which I will not identify because what happened is not the fault of their product – it is, as is so often true, attributable to human factors.
My laptop antivirus software updates itself, and I routinely run a virus scan on the laptop at least once a week (more, depending on how often and how long I’m there). I ran a virus scan on Monday and came back to find that it had found a Trojan horse program but was unable to do anything with it – couldn’t delete it, couldn’t quarantine it, nada. I found that peculiar, so I went to the tech staff.
They responded promptly: they ran the laptop in safe mode, ran the antivirus software, found the Trojan, and deleted it. All was good until the next day, Tuesday, when the Trojan showed up again – same message, same futile efforts by the antivirus software. So, back I went to the tech staff. They weren’t sure what to do, researched the matter, and decided the problem was that running the antivirus software in safe mode didn’t clear the Trojan from the registry (though now that I think about it, why would running the program in safe mode let it do what it could not do in regular mode?), so a very nice tech person cleaned the registry while I was out teaching a class.
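For readers wondering what “clearing the Trojan from the registry” actually involves: malware commonly persists by writing itself into the Windows autorun keys, so it is relaunched every time the machine boots – which is one way a “deleted” Trojan keeps coming back. Here is a minimal sketch, assuming a Windows machine and Python’s standard winreg module, that merely lists those autorun entries for inspection (it deletes nothing):

```python
# List the Windows "Run" autorun entries -- a common persistence spot
# for malware. Windows-only; uses Python's standard winreg module.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def autorun_entries():
    """Yield (name, command) pairs for every autorun value found."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # the key may not exist in this hive
        i = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, i)
            except OSError:
                break  # no more values under this key
            yield name, value
            i += 1

if __name__ == "__main__":
    for name, command in autorun_entries():
        print(f"{name}: {command}")
```

Deleting the offending entry (and the file it points to) is presumably what the tech staff did; the sketch stops at looking, which is the safe part.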
I come in yesterday and run into the nice tech person in the hall. By now I’ve really begun to wonder why the antivirus software had such a hard time with the Trojan, so after he tells me they cleaned the registry, that the Trojan is really gone and all is good, I ask about that.
I’m told that the program the law school uses (via the university) has had two upgrades in the last year, neither of which made it to my laptop. The effects of the first upgrade were apparently not that dramatic, so we’ll let that one go.
The second upgrade, which was implemented some months (4? 5? 6?) ago, left the software on my laptop incapable of updating itself . . . so for some months I have been running a laptop in my office whose antivirus software was increasingly out of date. Neither the notice that there was an upgrade nor the upgrade itself ever percolated down to me . . . which makes me wonder how many other law school users it missed. (Note: This is not intended as an invitation to would-be law school hackers.)
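The moral, I suppose, is “trust, but verify.” Checking whether an antivirus product is actually still updating itself is conceptually trivial: look at when its definition files last changed, and complain if it has been too long. A rough sketch in Python (the product name and definitions path are invented for illustration; every vendor stores definitions somewhere different):

```python
# Warn when antivirus definition files have gone stale -- the silent
# failure mode described above. The path below is hypothetical.
import os
import time

DEFS_DIR = r"C:\Program Files\ExampleAV\Definitions"  # hypothetical location
MAX_AGE_DAYS = 7  # definitions older than this suggest broken updates

def definitions_age_days(path: str) -> float:
    """Return days since the newest file under `path` was modified."""
    newest = 0.0
    for root, _dirs, files in os.walk(path):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    return (time.time() - newest) / 86400  # seconds per day

if __name__ == "__main__":
    age = definitions_age_days(DEFS_DIR)
    if age > MAX_AGE_DAYS:
        print(f"WARNING: definitions are {age:.0f} days old; "
              "updates may be failing silently.")
    else:
        print(f"Definitions are {age:.1f} days old; the updater seems to work.")
```

The point is not the script, of course – it’s that nothing in the existing arrangement surfaced the failure; the software just quietly stopped protecting me.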
Again, my point here is not to cause trouble for the good people who work in computer security at my law school and at my university.
My point is simply anecdotal . . . simply a personal experience with how completely out of whack our culture is with the need to secure systems . . . and KEEP them secure.
In a completely different context, someone said our grand jury system is “alchemical” in its function . . . by which they meant that we put together a group (12, 16, 23) of people, wave a set of proposed charges (an indictment) at them, which they almost instantaneously approve and we have a criminal case. The point was that nothing really happens, in terms of having the grand jurors actually assess the merits of the indictment – that the process is almost purely symbolic.
I’m beginning to wonder if a lot of what we do about computer security isn’t alchemical in the same sense. Effort happens, and that’s supposed to count, somehow.
This is one of those days when, if I were a gambler, I’d definitely be putting my money on the cybercriminals.
Tuesday, September 26, 2006
Can You Hack an Unsecured Computer?
This is a follow-up, in some ways, to my post about holding people criminally liable for not securing their computer systems.
I noticed that Germany is revising its computer crime laws somewhat, and that reminded me of a distinctive aspect of the German Criminal Code’s approach to obtaining unauthorized access to computer data. Section 202a of the German Criminal Code makes it a crime to obtain data from a system without authorization if the system was “specially protected against unauthorized access”.
New York has a similar provision. Section 156.04 of the New York Penal Code makes it a crime for someone “knowingly” to use or cause “to be used a computer . . . without authorization and the computer utilized is equipped or programmed with any device or coding system, a function of which is to prevent the unauthorized use of said computer”. This provision has been used to dismiss charges of hacking. In People v. Angeles, 687 N.Y.S.2d 884 (N.Y. City Crim. Ct. 1999), an employee of a NY car service was charged with gaining unauthorized access to data in a computer owned and operated by the car service in violation of section 156.04. He moved to dismiss the charges, pointing out that the computer in question was not password-protected or otherwise “equipped or programmed with any device . . . to prevent the unauthorized use” of the computer. The court agreed, and dismissed the charges.
So, in New York and Germany (and the Netherlands), you can’t hack an unsecured computer.
The purpose in each instance, as I understand it, was to filter unauthorized access cases – to keep police from having to respond if the owner of the computer in effect left the door to the computer wide open. The implicit premise seems to have been, as I think they used to say in an old TV cop show, “take care of yourself out there.”
We don’t do this in the real world, at least not in MOST of the US (New York, as we’ve seen, is an exception). If I am foolish enough to leave my house with the front door standing wide open and my new laptop sitting on a table just inside the door, my irresponsibility (stupidity?) in no way undermines my right to expect that the police will investigate the crime. We simply do not incorporate victim fault into our criminal law (though we do in tort law). So the police can’t say to me, “sorry, but you shouldn’t have left the door open. We’re not going to waste our time tracking down the perp when you didn’t do anything to prevent the crime.”
Right now, in most of the US the real-world rule applies in the online context . . . which, aside from anything else, creates some difficult conceptual issues when it comes to wireless networks. Unlike my house with the open door – which is a quintessentially passive situation – an unsecured wireless network is “active,” in that it casts a web outside my house and, some would argue, essentially “invites” unauthorized persons to use it. So, there continues to be quite a debate about whether or not it is hacking to free-ride on an unsecured wireless network, i.e., simply to make use of the network to go online.
If we required wireless network owners to secure their systems, the analysis would be much easier. I believe I read earlier this year that a New York town adopted an ordinance requiring people running wireless networks to secure them, though I can’t find the story just now. (I think users who did not secure their systems faced a fine in that instance, presumably because the NY statute would already bar charges for unauthorized access to an unsecured wireless network.)
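Whether a network is secured, by the way, is not a matter of judgment – a network advertises its encryption (or the lack of it) in its own broadcasts, so either side of this debate can check. A minimal sketch of such a check, assuming a Linux machine with NetworkManager’s nmcli command available (the mechanics vary by platform):

```python
# List nearby wireless networks and flag the ones that advertise no
# encryption at all. Assumes NetworkManager's nmcli command is available.
import subprocess

def open_networks() -> list[str]:
    """Return SSIDs of visible networks advertising no security."""
    result = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,SECURITY", "device", "wifi", "list"],
        capture_output=True, text=True, check=True,
    )
    flagged = []
    for line in result.stdout.splitlines():
        ssid, _, security = line.partition(":")
        if ssid and security in ("", "--"):  # no WEP/WPA advertised
            flagged.append(ssid)
    return flagged

if __name__ == "__main__":
    for ssid in open_networks():
        print(f"Unsecured network: {ssid}")
```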
Interesting issue: If you don’t bar the virtual door, is it a crime for someone to enter it?
Tuesday, September 19, 2006
Using Your Office Computer to Access Pornography
Last July the Florida Supreme Court announced it was going to reprimand Judge Brandt C. Downey for, among other things, habitually “viewing pornographic websites” on the computer in his chambers. The case is Inquiry Concerning a Judge (Florida Supreme Court SC 05-2228) (July 13, 2006), 2006 WL 1911389.
In its opinion, the Florida Supreme Court also notes that the judge’s “pervasive practice of viewing pornography” from the computer in his chambers “resulted in frequent computer viruses infecting” the computer. This, in turn, meant that courthouse staff had to remove the viruses from his computer.
The opinion notes that “on at least two occasions, courthouse personnel were unwittingly exposed to pornographic images when they reported” to the judge’s chambers to remove viruses from his computer. The opinion also notes that the judge “repeatedly ignored e-mail warnings . . . from court technology staff” which advised him “of the potential risk to the entire computer network due to [his] viewing of certain websites.”
The Florida Supreme Court found that these allegations, if true, violated Canon 1 of the Code of Judicial Conduct. Canon 1 basically says that a judge should maintain and enforce “high standards of conduct” and should personally observe those standards. The judge admitted to violating Canon 1, and this violation became part of the basis for his being reprimanded.
I find this case interesting for several reasons, one of which is that what the judge was doing on his office computer is, I suspect, far from unusual in the American workplace. We are given computers to use at work, and I think the line between “their” computer (which is to be used only officially, for company business) and “our” computer tends to blur in our minds . . . so we think nothing of using “their” computer for “our” own purposes.
(Indeed, I am doing that right now – I am writing this blog post on the laptop in my office at the law school where I teach. Writing a blog post is definitely not as far afield from my employment obligations as the judge’s looking at porn in his chambers, but it’s pretty clearly not in my job description, either.)
A lot of this is inevitable. We spend a lot of time at work – are we supposed to be offline for the entire time, or is it reasonable for us to use our work computers (and our work email addresses) for “personal” reasons?
That question raises a lot of interesting issues. Some of the things we do with “their” computers while we’re at work don’t hurt anyone or anything, but some of what we do can “harm” the company we work for. The judge was exposing the court’s computer system to viruses. He was not doing this intentionally, but his activity still had that effect. It doesn’t seem as if the court computer system was seriously compromised by the viruses, but the viruses could have interfered with other employees’ ability to use the system, and the court may have incurred expenses in having the viruses removed.
What happened to the judge raises another issue: He was disciplined because the judicial system found that his recreational use of his office computer was not consistent with the standards of behavior we require of judges. What standards, I wonder, do we require of the rest of us?
Do I have an obligation to use my office computer in a way that minimizes its exposure to viruses and other evils? If so, how far does that obligation extend – am I supposed to educate myself about the dangers that lurk online so I can more effectively avoid them? Or is the security of my computer and the system it is linked to purely the concern of our computer staff?
More importantly, perhaps, how am I supposed to know? The judge got into trouble because he was bound by an external set of standards – the Code of Judicial Conduct. What, if anything, is supposed to put the rest of us on notice as to the responsibility, if any, we have to use our office computers in a “responsible” manner?
Friday, September 15, 2006
Hold People Liable for Cybercrime?
This is, I hope, going to be a relatively short but provocative post.
Elsewhere, I have analyzed the necessity and viability of holding the “users” of technology – you and me – criminally liable for not preventing cybercrime, at least under certain conditions and subject to certain constraints.
I am not going to go into detail on what I have written elsewhere; if you want a longer version, you can find it here and here.
As I explain in those and other articles, our current model of law enforcement (police react to a completed crime, investigate, identify and apprehend the perpetrator, who is then prosecuted, convicted and sanctioned . . . which takes him/her out of commission and deters others from following his/her example) is not very effective for cybercrime.
It is not particularly effective for cybercrime because the model assumes territorial crime, that is, it assumes that the victim(s) and perpetrator(s) are in some physical proximity when the crime is committed. This, in turn, means that:
- they’re in the same jurisdiction, the same country, so the country’s laws clearly apply and the country clearly has jurisdiction to prosecute;
- physical proximity means there is trace evidence at the crime scene (think CSI) and that individuals located in the area of the crime are likely to have seen things that can help identify the perpetrator;
- the perpetrator may even be known, locally, which helps with identification;
- once identified, the perpetrator can be apprehended with relative ease.
None of these assumptions holds for cybercrime. Online, perpetrators can:
- be anonymous or pseudonymous;
- commit crimes across national borders (maybe across several national borders); and
- commit crimes on a much larger scale (real-world crime tends to be sequential, cybercrime tends to be simultaneous and cumulative).
So, I argue, we need to move to a model that ALSO emphasizes prevention . . . which is where we come in. Currently there is no legal obligation to secure systems and otherwise frustrate cybercriminals. Currently, criminal law does not take the negligence or recklessness of the victim into account – if I leave my keys in my car and it’s stolen, that’s still a crime. Criminal law has no doctrine of assumed risk: my negligence in leaving the keys there and creating an opportunity for a car thief has no consequences in criminal law because a crime is not “against” me, it’s “against” the state . . . it’s not personal, it’s a matter of social control.
In the articles I noted above, I argue that we should change this in two basic ways: One deals with crimes in which the person who didn’t secure their system is the only victim; the other deals with crimes in which the perpetrator used computers their owners (A and B, say) had not secured to attack others (C and D, say, to keep it simple).
I argue that we should use a form of assumed risk for the first scenario, the one in which the owner of the system is the only victim. What this could mean (it could be structured in various ways) is that law enforcement would have no obligation to investigate the crime and try to apprehend the perpetrator; they could if they wanted to (because crime is an offense against the state), but they would be free to ignore it if they concluded that the injury was only to the person who, in a sense, allowed it to be inflicted.
We could use a modified version of accomplice liability to address the second scenario – the consequent victimization scenario. Here, A’s and B’s respective negligence resulted in the infliction of “harm” on C and D, who, we are assuming, did nothing wrong. Since A and B contributed to the commission of the crimes against C and D, they could be held criminally liable for facilitating those crimes. It would probably be a low level of criminal liability and a low penalty (maybe only a fine, maybe community service).
The goal in both instances is to change behavior, to bring home to people that there are consequences of not securing their systems. I think the current state of complacency with regard to securing (or not securing) computers is a function of our implicitly assuming that crime is the sole province of the police. (It may also be in part attributable to the fact that people don't see cybercrime as "real" crime . . . not as the kind of crime that warrants alarm systems and burglar bars.)
It was not always that way; crime control used to be partially, and even primarily, a civilian, community function. The police-only model of crime control has been dominant for only about a hundred and fifty years, since Sir Robert Peel invented the professionalized police force in nineteenth-century London.
Maybe it’s time we realized that technology is changing our world and that we can't rely only on old assumptions.
Or maybe I’m completely off base.
Thursday, September 14, 2006
Defamation
Defamation -- or libel -- was a crime at English and, later, American common law.
The justification for making it a crime was that it tended to cause a "breach of the peace" -- which originally meant that the person defamed and the one who made the defamatory statement might get into a duel.
Later, it seemed to mean merely that they might fight or otherwise engage in disruptive behavior.
In 1961, the drafters of the Model Penal Code -- which is the template for criminal law in this country -- decided defamation should not be criminalized. They said it was the most difficult decision they faced in updating and streamlining American criminal law. Their primary reason for not criminalizing defamation was that it was not necessary: anyone injured by defamation could file a civil suit, and if the claim was valid, the plaintiff could recover damages, which would be enough to make up for the "harm" caused by the defamation.
The drafters of the Model Penal Code made this decision long before there was an Internet, in a world in which defamation was "published," if at all, by deep-pocket entities: newspapers, magazines, television stations, radio stations and, sometimes, movies. In that world, these mass-media outlets had an incentive to vet -- to filter -- what they published in order to avoid being held civilly liable for defamation.
It's now 45 years later and "publication" has become much more democratic. With the Internet, we can "publish" whatever we want (except for child pornography and a few other outlawed items). This, I think, calls into question the assumption the drafters of the Model Penal Code made in not criminalizing libel.
Since we do not have to rely on a mass-media outlet to "publish" material, the filter editors at newspapers, magazines, TV and radio stations provided is gone. And so is the likelihood that an injured party can recover damages -- most people who post material online are what we in the law call "judgment-proof." That is, you may be able to get a $1,000,000 (or $1,000,000,000) damage award against them, but it's functionally meaningless. They will never be able to pay up.
Maybe I should use a case to illustrate what I mean: A few years ago, Daniel Curzon-Brown, reportedly an “openly gay” professor at City College of San Francisco, sued Ryan Lathouwers, founder and webmaster of Teacher Review, a site that posted “anonymous comments about . . . faculty members.” Curzon-Brown sued for defamation after he was the subject of “postings that use[d] the word `faggot’ frequently, and allege[d] that he raped and molested students in exchange for better grades.” One posting claimed he had sex with a student in the classroom; another accused him of killing a student. A year later, Curzon-Brown agreed to dismiss the suit and to pay the American Civil Liberties Union [ACLU] $10,000 in attorneys’ fees; the ACLU represented Lathouwers, who could not afford private counsel. According to news stories, had Curzon-Brown not agreed to dismiss, the ACLU would have moved for dismissal on the grounds that Lathouwers was statutorily immune from suit under the Communications Decency Act of 1996, and would have sought $100,000 in attorneys’ fees.
The Communications Decency Act “overrides the traditional treatment of publishers. . . . ‘such as newspapers, magazines or television and radio stations, all of which may be held liable for publishing . . . obscene or defamatory material written or prepared by others.’” Concerned about lawsuits inhibiting free speech online, Congress added Section 230(c)(1) to title 47 of the U.S. Code. It provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The effect of this provision is to immunize those who, like Lathouwers, post content that is provided by another, such as the individuals who submitted the postings about Curzon-Brown. At least one court has found that the immunity applies even though the operator of the site “exercises some editorial control over the anonymous postings.” A case argued a week ago before the California Supreme Court asks the court to find that the immunity conferred by this section does not apply, at least when certain conditions are met. Reports of the oral arguments in the case suggest, though, that the court is not inclined to do so (as, I would submit, it cannot under the federal statute).
The amount and variety of material that is posted online continues to generate efforts to hold someone civilly liable for what the object of a posting believes is false (maybe maliciously false) information. Todd Hollis, for example, is a lawyer in Pittsburgh. He is suing Dontdatehimgirl.com after three anonymous women posted distinctly unflattering comments about his alleged behavior in his relationships with them. The site operator, of course, is relying on the provision quoted above -- the CDA section that immunizes the operator of a website, like dontdatehimgirl.com, which merely posts comments published by others.
I could give a number of other examples, some involving respectable websites, others involving more suspect conduct (like a man's pretending to be his former boss and posting an ad in her name -- using her home address, phone number and email address -- on a site where bored wives seek "sexual adventure" with other men). There's really no point, though, because the issues are exactly the same: Someone claims to have been defamed by what was posted online and wants redress (revenge). They can sue the poster (if they can identify him or her, a problem Todd Hollis apparently has in his dontdatehimgirl.com case), but he or she probably won't have enough money to pay the plaintiff's attorney fees. And they can't sue the site operator.
So what disincentive do we have to keep people from defaming others online? It doesn't seem we really have one, at least not unless the poster (i) identifies himself or herself, (ii) is in the same jurisdiction as the victim (which makes suing a much more viable option) and (iii) has enough money to pay a substantial damage award (or at least pay the plaintiff's attorney's fees if he/she wins). This is leading some to call for re-criminalizing defamation. If we were to do that, we would have to be very careful in how we defined criminal online defamation, because of the First Amendment, if nothing else.
Another factor the drafters of the Model Penal Code cited in not criminalizing libel is that defamation -- most of it, anyway -- inflicts a low level of "harm." They specifically said that the use of the criminal sanction would not be appropriate for those who merely spread "gossip" and rumors. Maybe that factor still applies -- maybe we just have to toughen up and deal with having things that were said behind our backs broadcast to the world . . . . Or maybe not . . . ?
Sunday, September 03, 2006
Child pornography: real and pseudo
As I assume everyone knows, the possession, distribution and/or creation of child pornography is a crime in the U.S. and in many other countries, including the United Kingdom.
I want to talk generally about the criminalization of child pornography, why we have it, what it encompasses, what it does not encompass, etc., but I want to begin with a recent case from the UK.
Stafford Sven Tudor-Miles of Easton, Middlesbrough in the UK, recently pled guilty to (a) five counts of attempting to make indecent pseudo-photographs of children and (b) one count of possessing indecent pseudo-photographs of children.
What did he really do? What he did, and please don’t ask me why, was scan “photographs of adult porn stars into his computer” and use “sophisticated digital equipment to reduce the size of their breasts.” I assume the photos showed the porn stars engaged in some of their professional activity.
Tudor-Miles’ attorney argued that no crime had been committed because the pictures were really those of adult women. That defense apparently did not work. According to the story in the TimesOnline, under the UK’s “Protection of Children Act 1978, as amended by the Criminal Justice and Public Order Act 1994, a pseudophotograph of a child is defined as an image, whether made by computer graphics or otherwise, which appears to be that of a child.” And UK law treats such an image “as showing a child even if some of the physical characteristics are those of an adult.”
Let’s talk for a minute about how this case would be handled under US law, and then we’ll analyze the result.
Our First Amendment protects speech, except in certain, very limited instances. The Supreme Court recognized, in New York v. Ferber, 458 U.S. 747 (1982), that child pornography is speech within the compass of the First Amendment, but held it can be criminalized for two reasons: One is that children are harmed – physically and emotionally – in the creation of child pornography. The other is that the child pornography is a permanent record of the “harms” inflicted on the children, and this record can remain in essentially permanent circulation. The Supreme Court found that the infliction of these two “harms” overrode First Amendment considerations.
(I often analogize this to the concept of a snuff film. If anyone were idiotic enough to make a First Amendment argument to support the creation and possession of a snuff film, the argument would fail because of the “harm” – the death of a human being – involved in creating the film.)
The Ferber Court was talking about “real” child pornography – child pornography involving the use of “real” children. Computer technology is making it possible, at some level, to create “virtual” child pornography – either by morphing existing images of adults, as Tudor-Miles did, or by using CGI to create the images from scratch. Now, computer technology still is far from the point at which a CGI image is indistinguishable from that of a real person, or a real child, but you can create a good simulacrum.
In 2002, in Ashcroft v. Free Speech Coalition, 535 U.S. 234, the Supreme Court struck down the federal statute that criminalized virtual child pornography. More specifically, the statute made it a crime to create, possess and/or distribute any image that “is, or appears to be, of a minor engaging in sexually explicit conduct.” Since “real” children are not involved in the creation of virtual child pornography, the government could not rely on Ferber.
Instead, the government offered two other reasons why virtual child pornography should not be protected by the First Amendment (and can therefore be criminalized): One is that it “whets the appetites” of pedophiles. The other is that pedophiles use child pornography to seduce children into sexual activity. The Supreme Court rejected both.
As to the “whets the appetite” argument, the Court found the claim too broad. As it noted, the “mere tendency of speech to encourage unlawful acts is not a sufficient reason for banning it.” If that were true, we’d have no Grand Theft Auto, no slasher flicks, no “caper” movies, none of that.
As to the premise that pedophiles use child pornography to seduce children, the Court found that equally unpersuasive. It noted that other things – “cartoons, video games, and candy” – would be at least as effective for this purpose, but we do not ban them. The Court concluded that the government cannot “ban speech fit for adults simply because it may fall into the hands of children.”
Congress quickly adopted another statute (the PROTECT Act of 2003), which pretty much does the same thing as the one the Court struck down, but it has not, to my knowledge, been challenged as yet. I think the new statute is unconstitutional for the same reasons the first one was.
Which brings me back to the issue I wanted to raise: Is there any reason to criminalize virtual child pornography, i.e., pornography that appears to involve children engaging in sexual activity but really does not?
Child pornography is, by definition, not obscene – obscene material is banned by other statutes (which are also, I think, constitutionally problematic). Child pornography is material that is not obscene, material that would be mere pornography if it involved adults; it has been criminalized for the reasons given in Ferber.
Why, if at all, does it make sense to prosecute and incarcerate Tudor-Miles for making fake child pornography? Why, if at all, would it make sense to prosecute someone who used next-generation computer technology to create what seems to be child pornography but is really just the depiction of activity by computer-generated images?
There’s a case from Canada that went all the way to the Canadian Supreme Court, which involved “textual child pornography.” Canadian police seized a CD from the home office of a fellow; on it were stories he had written that featured children engaged in sexual activities (with, I believe, adults). He was prosecuted for possessing the stories – stories he had written and had not distributed. The Canadian Supreme Court held, basically, that the charge was improper because nothing “real” was involved – the stories were, as the court said, mere fantasies, the products of his imagination.
So, is virtual child pornography mere fantasy and, as such, something the law should not condemn? Or is there a good reason to go after people like Tudor-Miles?