As part of my effort to learn about the history of cybercrime, I just read a book published in 1973: Computer Crime: How A New Breed of Criminals Is Making Off With Millions, by Gerald McKnight.
Like the other early books about computer crime, McKnight's book focuses solely on "mainframe computer crime." Most of the book deals with how crimes can be committed by using mainframe computers, but it also addresses an issue I had never thought about: attacks on computers.
By "attacks on computers" I don't mean the kind of virtual assaults we think of when we hear the phrase "attacks on computers:" hacking into a computer system without authorization; launching a Denial of Service attack to shut down a system; and launching malware to cripple systems, erase data or wreak other kinds of havoc. No, I mean physical attacks on computers.
McKnight describes what he sees as an inevitable trend toward computer sabotage as "the most serious threat facing the electronic society". (Computer Crime, p. 83.) He attributes the trend, in part, to what he says is "our simple fear of being 'taken over.' . . . the worry that the computer may turn into a monster. Get out of control." (Computer Crime, p. 83.) He also attributes it to our "fear of this metal beast which has come to take jobs from men". (Computer Crime, p. 100.) And he at least implicitly suggests it may reflect our "intuitive fear" of having to compete with a new type of life: "'a form of machine life that will be the outgrowth of today's computer science.'" (Computer Crime, p. 98.)
McKnight devoted one chapter to the May 1970 attempt to blow up the New York University computer center, arguing that it was a manifestation of this fear, and claiming that the episode "served a useful purpose in this respect: it gave us a warning." (Computer Crime, p. 95.) In a later chapter, McKnight described other attacks on computers and explained that these incidents, in which "a human being expresses in violence deeply repressed feelings of hostility toward the computer," are an indication that some portion of mankind, anyway, "is in subconscious revolt against the machine." (Computer Crime, p. 105.) He cautioned that while computer saboteurs were "not yet organized," they "should be regarded as the outriders of a growing guerrilla force." (Computer Crime, p. 105.)
McKnight noted that in the spring of 1972 a number of computers were bombed in New York, and suggested that these bombings were another empirical indication of the growing hostility against computers. (Computer Crime, pp. 105-106.) He speculated that "Electronic Luddites" may someday "systematically seize and . . . destroy . . . the vital computers controlling national power grids and other services". (Computer Crime, p. 113.)
I find McKnight's speculations in this regard fascinating . . . as something that might, perhaps, have come to pass, but has not, presumably because of the development and proliferation of the personal computer and the Internet.
The personal computer gave everyone access to computing power that far exceeds what was available to businesses and government agencies thirty years ago, when McKnight wrote. This democratization of computer technology effectively eliminated the possibility that humans would perceive computers as a threat and strike back at them. It prevented computers from becoming the sole province of an elite technocracy and from being perceived as instruments of oppression. Instead, personal computers became a tool for the masses, in a fashion analogous to the telephone, radio and television. We are well on our way to becoming addicted to computers even though, as McKnight noted, they do take over tasks that were once performed by people.
The Internet also altered the playing field: When McKnight wrote, computers were stand-alone mainframes. If, as he describes, someone threw a bomb into a mainframe or physically attacked it in some other way, the mainframe could be crippled or even destroyed. That would have the dual effect of destroying the machine -- the object McKnight postulates as the target of human hostility -- and of eradicating the data held on the machine. With the development of the Internet and the proliferation of networked computers, the destruction of a single computer would be a far more futile and consequently far less satisfying act. The destroyed computer would almost certainly not be the only repository of the data it held; the data would likely be available from other computers with which the victim computer was networked, and would probably also be archived in some other storage area.
The proliferation of the network also had a psychological effect: If someone were to bomb or otherwise destroy "a" computer today, the act could not provide the visceral satisfaction the people McKnight describes must have felt when they destroyed a mainframe. They destroyed "the" computer. Now, if someone were to bomb a computer hooked to the Internet, it would be the equivalent of destroying a terminal, or maybe a typewriter; "the" computer is the network, not the appendages that are connected to the network.
Although McKnight's forecasts of man-machine war have not come to pass and seem unlikely to come true in the near future, there may be a time when a rivalry develops between humanity and its creations. The development of the personal computer and the Internet may have rendered much of his analysis irrelevant, but we have not yet had to deal with true "machine intelligence," with computers that can analyze, learn and even reflect.
British Telecom recently released its 2006 technology timeline, which predicts various steps in the evolution of technology from now until 2051. Among other things, the timeline predicts that an Artificial Intelligence entity will be elected to Parliament in 2020. I rather doubt we will greet milestones such as this by becoming Electronic Luddites who are hell-bent on the destruction of intelligent technologies; I certainly hope we do not take this path. But I suspect many people will find it difficult to accept machine life-forms. . . .
Monday, February 27, 2006
Low hanging fruit . . .
This is a follow-up to the post I did last week on why our current model of law enforcement is not very effective when it comes to cybercrime.
Most experienced cybercrime investigators will tell you that it is the inept cybercriminals -- the "low-hanging fruit" -- who are being caught. The clever cybercriminals are much less likely to be identified and apprehended, especially if they operate from certain countries.
This is not to criticize either the efforts or the professionalism of cybercrime investigators. They are doing the best they can with a whole new ballgame; aside from anything else, our current model of law enforcement was not designed to deal with transnational crime, which is what cybercrime is increasingly becoming.
But back to low-hanging fruit: To illustrate my point about the ineptitude of the cybercriminals who are being caught, I want to use Myron Tereshchuk, whose story has been told in various places, including the New York Times. See Timothy L. O'Brien, The Rise of the Digital Thugs, NYT (August 7, 2005).
Tereshchuk operated a small patent document service that competed with MicroPatent, which describes itself as "the world's leading source for online patent and trademark information." It is, obviously, essential for those who provide these services to have access to the U.S. Patent and Trademark Office. Several years before he became a cybercriminal, however, Tereshchuk was banned from the Patent Office, for one of two reasons: Some say it was because a Patent Office employee accused him of threatening to bomb the office, while others say it was because he was accused of taking files from the office without permission. Either way, Tereshchuk came to blame MicroPatent for his troubles, and decided to take action.
In February 2003, Tereshchuk used unsecured wireless networks around the D.C. area to send emails to MicroPatent clients that ostensibly came from a disgruntled MicroPatent employee using the company's email system. The emails provided information about, and instructions for, a sex-toy patent held by one of the company's clients. This seems to have been purely an act of harassment, since Tereshchuk made no demands at this point. Later in 2003, he used the same tactic to send passwords and customer data to MicroPatent clients; again, his goal seems to have been harassment only.
In January 2004, Tereshchuk sent a series of threatening emails to MicroPatent, using the alias "Bryan Ryan." In these emails, "Ryan" claimed to have confidential MicroPatent documents, along with customer data and computer passwords. "Ryan" warned MicroPatent's President that unless his demands were met (more on those in a minute), this information would "end up in e-mail boxes worldwide." "Ryan" included "samples" of confidential MicroPatent documents to prove that he had access to such material. He also threatened a Denial of Service attack, claiming that if MicroPatent did not meet his demands, he would overload its servers with data and shut them down.
"Ryan" told MicroPatent it could avoid all this havoc by paying him $17,000,000. "Ryan" also told MicroPatent to send the $17,000,000 in three checks, each payable to "Myron Tereshchuk" and each to be sent to his parents' home in Maryland. The checks, of course, were not issued. Instead, the FBI arrested Tereshchuk, who eventually pled guilty to one count of attempting to use a computer to extort $17,000,000. Tereshchuk is currently serving the 63-month sentence he received under this plea.
Myron Tereshchuk is a classic example of low-hanging fruit: an incredibly inept cybercriminal, one who identified himself to the agents investigating his activities. Tereshchuk joins the low-hanging fruit Hall of Fame, along with Jeffrey Lee Parson, who released a modified version of the Blaster worm that had his website address in its code. It did not take very long for the FBI to find Parson, who was lucky enough to turn 18 a few weeks before he released his version of the worm; this, of course, made him a viable target for federal prosecution.
Most of the low-hanging cybercrime fruit is just that: Inept criminals who are caught because they "out" themselves (like Tereshchuk and Parson), operate in the U.S., come to the U.S. after having been identified as cybercriminals or engage in other, equally foolish maneuvers. Most of the headlines we see about cybercriminals' being apprehended deal with low-hanging fruit.
We should not, therefore, find much reassurance in those headlines . . . for there are many, many cybercriminals out there who are distinctly not low-hanging fruit. Those are the ones we have to worry about . . . .
Saturday, February 25, 2006
Online vigilantes: where we may be going
When you think about it, Superman is a vigilante, along with most (all?) of the other superheroes. Like other superheroes, he helps law enforcement by getting "bad guys"; and like other superheroes, he does this on his own, having no official ties to law enforcement.
This distinguishes Superman and the superhero crowd from the kind of vigilante I talked about in my last post: vigilantes who have, historically, emerged when there was a law enforcement vacuum. Traditional vigilantes were a substitute for law enforcement, rather than a supplement to an existing law enforcement presence.
Cyberspace creates a mixed environment: There is a law enforcement presence in cyberspace but, as I said in an earlier post, it cannot be as effective in controlling online crime as it is with regard to real-world crime. The already-perceived inefficacy of traditional law enforcement is giving rise to a new kind of vigilante: the superhero-as-adjunct-to-law-enforcement-vigilante.
I talked a little about that kind of vigilante in my last post, when I described how the Perverted Justice staff is now working increasingly closely with law enforcement. At least three federal courts of appeals have had occasion to consider how, if at all, these adjunct-to-law-enforcement vigilantes fit into our criminal procedure law. More precisely, the issue that came up in these cases was whether the Fourth Amendment prohibition on unreasonable searches and seizures applies to the activities of these vigilantes.
The first of these cases came from Alabama: In 2000, Captain Murphy of the Montgomery, Alabama Police Department received an email from unknownuser@hotmail.com. The email said that the author had found a child molester in Montgomery, that the writer knew the child molester's name (Brad Steiger), address, telephone number and "Internet account," and could "see when he was online." See United States v. Steiger, 318 F.3d 1029 (11th Cir. 2003). Murphy responded, asking for more information, and Unknownuser (as he/she came to be known) sent images showing Steiger and a young girl in varying states of dress and undress; Unknownuser also sent Steiger's checking account information and "identified specific folders where pornographic pictures were stored on Steiger's computer." See United States v. Steiger.
Unknownuser, a self-described pedophile hunter, was able to do all of this because he/she had installed a Trojan horse program on Steiger's computer. According to Unknownuser, he/she had uploaded the program to a newsgroup patronized by pedophiles and waited as it installed itself on various computers; in his/her email correspondence with Murphy, he/she claimed to have caught 2000 child pornographers with the Trojan horse. See United States v. Steiger. He/she refused to speak with Murphy by phone or in person; Unknownuser claimed to be a Turkish hacker "with a family." "He" also claimed "he" would jeopardize his family and job if "he" revealed "his" identity. See United States v. Steiger. Despite the fact that Unknownuser's activities have been the subject of several federal court decisions and one state appellate court decision, we still have no idea who he/she was or where he/she was located. As an officer said when I used this case in law enforcement training a year ago, "he could have been in Peoria, for all we know."
Murphy contacted the FBI; FBI agents used the information Murphy had received from Unknownuser to get a warrant to search Steiger's home and computer. As Unknownuser predicted, the agents found child pornography on Steiger's computer. He was indicted for possessing and creating child pornography, among other things. Steiger argued that the evidence against him should be suppressed because even though the police obtained his computer and the evidence on it by conducting a search pursuant to a lawful warrant, the search warrant was based on evidence obtained by Unknownuser's Trojan horse. Steiger claimed that Unknownuser, a civilian, had been acting as an agent of the police when he/she used the Trojan horse to search Steiger's computer. See United States v. Steiger.
The Fourth Amendment, and other constitutional protections, only apply when there is state action. So, if Unknownuser was acting as a private citizen (of whatever country), the Fourth Amendment was not implicated by his use of the Trojan horse; if, however, he had been acting as an agent of the police, then the Fourth Amendment would apply to his use of the program, and the evidence it elicited would have been obtained unconstitutionally. Steiger would win on his motion to suppress, which would effectively gut the case against him.
The district court denied the motion to suppress, and the Eleventh Circuit Court of Appeals affirmed that decision. See United States v. Steiger, 318 F.3d 1029 (11th Cir. 2003). A civilian becomes an agent of the police only if two conditions are met: (i) the person acted with the intent to help law enforcement; and (ii) the government knew of the person's activities and either acquiesced in or encouraged them. See United States v. Steiger. Unknownuser's purpose was to benefit law enforcement, so the first requirement was met. The Eleventh Circuit held, however, that Unknownuser was not acting as a state agent when he/she searched Steiger's computer because the government was completely unaware of what he was doing and therefore could not have either acquiesced in or encouraged his/her activity. This holding was clearly correct.
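For readers who like to see a rule's structure laid bare, here is a minimal sketch, in Python, of the two-prong conjunction the court applied. The function and the fact values are my own toy restatement of the test and of the Steiger facts as the opinion describes them, not anything drawn from the opinion's text:

```python
def is_state_agent(intended_to_help_police: bool,
                   government_knew: bool,
                   government_acquiesced_or_encouraged: bool) -> bool:
    """Toy restatement of the two-prong agency test: both prongs must be satisfied."""
    prong_one = intended_to_help_police
    prong_two = government_knew and government_acquiesced_or_encouraged
    return prong_one and prong_two

# On the Steiger facts as the court described them: Unknownuser meant to help
# law enforcement, but the government had no idea what he/she was doing, so
# the second prong fails and the Fourth Amendment is not implicated.
print(is_state_agent(True, False, False))  # -> False
```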
On December 3, 2001, Captain Murphy received an email from Unknownuser which said he/she had found another child molester, named Jarrett, who lived in Richmond, Virginia. See United States v. Jarrett, 229 F. Supp. 2d 503 (E.D. Va. 2002), rev'd, 338 F.3d 339 (4th Cir. 2003). Unknownuser asked Murphy to put him/her in touch with the FBI in Richmond, so they could pursue Jarrett. Murphy did, and over the next several months an FBI agent carried on an email correspondence with Unknownuser about the Jarrett investigation.
In her emails, this agent repeatedly assured Unknownuser that the U.S. government would not prosecute him for "hacking" because he/she was outside the U.S., so our laws would not apply to his/her activities. This is, first of all, not true; section 1030 of Title 18 of the U.S. Code, which is the basic federal cybercrime provision, makes hacking a crime and gives the U.S. jurisdiction to prosecute when hacking involves the use of a computer located outside the United States. There is also the uncertainty as to precisely where Unknownuser actually was; if he/she was, in fact, in Peoria, it would not have been necessary to invoke extraterritorial jurisdiction to prosecute him/her.
The agent also repeatedly told Unknownuser that while she could not ask him to "search out" cases like the Steiger and Jarrett cases (because then he would be "hacking" at the behest of the U.S. government), "if you should happen across such pictures as the ones you have sent to us and wish us to look into the matter, please feel free to send them to us." See United States v. Jarrett. She told him she "admired" him and repeatedly assured him that federal prosecutors had no desire to prosecute him for his activities in seeking out those involved with child pornography.
Jarrett was prosecuted for possessing child pornography based, again, on evidence derived from Unknownuser's Trojan horse, which had installed itself on Jarrett's computer. Jarrett, like Steiger, moved to suppress the evidence, making the same argument Steiger had. The district court granted the motion to suppress; in a lengthy opinion, it detailed the contacts between the FBI and Unknownuser and concluded that in the Jarrett investigation the FBI had encouraged Unknownuser's efforts, so he became an agent of the state. That being the case, the evidence he/she obtained from Jarrett's computer was elicited unconstitutionally, in violation of the Fourth Amendment. The Fourth Circuit Court of Appeals disagreed, reversing the ruling suppressing the evidence. Though the Fourth Circuit found that the FBI had "operated close to the line in this case," it ultimately held that the agent's communications with Unknownuser were not sufficient to transform him/her into a state agent under the standard given above. See United States v. Jarrett, 338 F.3d 339 (4th Cir. 2003).
I think the Fourth Circuit erred in this respect, but my concern is not with Steiger or Jarrett. What I find interesting about these cases is that Unknownuser was acting as the new kind of vigilante I described above -- the adjunct-to-law-enforcement-vigilante.
When vigilantes substitute for law enforcement, it is relatively easy to find that their actions were "outside" the law, and therefore "criminal." When vigilantes "help out" law enforcement, the analysis becomes more difficult, as the Steiger and Jarrett cases demonstrate.
The Steiger and Jarrett cases are not the only ones to consider the legality of the activities of an adjunct-to-law-enforcement-vigilante; the same issue came up in a case that went to the Ninth Circuit Court of Appeals. In United States v. Kline, 112 Fed. Appx. 562 (9th Cir. 2004), the Ninth Circuit reversed a district court's order suppressing evidence obtained by Brad Willman, a Canadian who used a Trojan horse program to find child pornography on a computer used by a judge in Orange County, California. The district court suppressed for essentially the same reasons as the district court in the Jarrett case; the Ninth Circuit reversed for essentially the same reasons the Fourth Circuit reversed in the Jarrett case.
There are only a few reported cases on this issue, so far, but my sense is that we will be seeing more and more of the adjunct-to-law-enforcement-vigilante. Our current law enforcement model is, as I argued in an earlier post, not particularly effective in dealing with online crime. It is not so ineffective that we have a law enforcement vacuum online, but law enforcement's effectiveness in controlling crime is seriously eroded online.
I do not think that is a transient state of affairs. Indeed, I think the challenge law enforcement currently faces in the online context will be exacerbated by the accelerating, almost exponential evolution of technology. I believe, therefore, that we will need to develop new strategies for dealing with online crime, in whatever form it takes.
Unknownuser, Willman and the Perverted Justice staffers may represent the beginnings of a new approach to controlling crime online . . . one that emphasizes police-citizen cooperation instead of making crime control the exclusive province of a professional police force. If that does come to pass, we may have to reconsider the rules that govern civilian participation in the policing process.
Tuesday, February 21, 2006
Online vigilantes: where we are
This image depicts the 1856 hanging of Charles Cora and James Casey, both suspected murderers, by the San Francisco Committee of Vigilance.
The United States has a long history of vigilantism, which is civilians' "taking the law into their own hands." Vigilantism emerges when "official" law enforcement is lacking or is perceived as being ineffectual. Americans are likely to associate vigilantism with the "Wild West," a place and an era in which law enforcement was often sorely lacking. While "vigilance committees" were organized around the country at various times, we generally associate vigilantes with the Wild West, and with scenarios like that depicted in The Ox-Bow Incident.
With a few isolated exceptions, vigilantism had disappeared from American society by, say, the middle of the twentieth century. The last few years, however, have seen the emergence of a new kind of vigilante, in this country and elsewhere. This new kind of vigilante either operates totally online or uses cyberspace to orchestrate vigilante activity in the real-world. The Artists Against 419 are an example of the first kind of vigilante; the Artists Against 419 and similar groups use online tactics to harass and sabotage those who perpetrate Advance Fee Fraud, or 419, schemes. Perverted Justice and similar groups use online tactics to identify adults who seek to have sex with minors; their favorite tactic is having an adult pretend to be a minor participating in an online chat room. The adult who is pretending to be a minor goes online and participates in a chat room frequented by adults who want to have sex with children; the fake minor chats with these adults, and eventually arranges a meeting with one of them in the real, offline world. If and when the adult seeking sex with a minor shows up for the meeting, he or she may be arrested by police who have been informed of the meeting and/or may be "outed" on the Perverted Justice website.
The activities of both types of online vigilante raise an obvious question: Is this legal? Some of the tactics used by The Artists Against 419 are legally questionable, in that they may constitute a denial of service attack, which is a crime in the U.S. and in at least certain other countries. The activities of Perverted Justice and similar groups are more problematic, for at least two reasons.
One is that Perverted Justice has been criticized for, among other things, erroneously "outing" innocent people on their website. This could constitute defamation, which is civilly actionable and may also constitute a crime. Perverted Justice denies this and claims they have never been sued and that their operations "line up nicely with the bill of rights." While I have my doubts about this, I tend to agree that their activities probably do not violate existing law. We do not make "vigilantism," as such, a crime; instead, we prosecute vigilantes for the crimes they commit in the course of taking the law into their own hands. So we prosecute vigilantes for murder if they kill someone, for assault if they beat someone, and so on. If Perverted Justice "outs" someone who clearly did show up intending to have sex with a child, this would not be defamation, harassment or any other crime I can think of at the moment. If they incorrectly "outed" someone, this could be civilly actionable defamation, but the person might very well not want to file a lawsuit and bring more attention to the matter. Incorrectly "outing" someone might also be criminal defamation, but criminal defamation is not a crime in all states, tends to be a very minor offense in states that do make it a crime, and is generally not prosecuted. So, I suspect there is not much that could and would be done if Perverted Justice had erroneously "outed" an innocent person.
The other reason I see Perverted Justice and similar groups as problematic goes to the kind of activity Perverted Justice has been specializing in of late, as broadcast by NBC. I refer, of course, to Perverted Justice's running its online stings and luring adults to a location where they think they will have sex with a child, but where they actually encounter police who arrest them and reporters who memorialize the whole transaction. In my next post, I want to talk about the legal issues this kind of activity raises, and about what it might suggest about how vigilantism will evolve online.
Cybercrime and law enforcement
For about the last century and a half, countries have used a particular model of law enforcement to keep crime at manageable levels within their territory. This model relies on a professional police force that controls crime primarily by reacting to completed crimes and apprehending the perpetrators, who are then tried, usually convicted and sanctioned for their misdeeds.
This model of law enforcement makes some effort to prevent crimes, mostly by interrupting crimes while they are in the planning or preparatory stage. But as we all know from the media, our primary crime control strategy is the reactive police model we have used since Sir Robert Peel invented modern policing in nineteenth century England.
As I have argued elsewhere, I do not think this model can be effective for cybercrime because the model is based on four empirical assumptions, none of which hold for cybercrime.
First, the model assumes physical proximity between perpetrator and victim at the time the crime is committed. Historically, it was not possible to defraud, rob or murder someone without being face-to-face with the victim; the necessity for physical proximity when a crime is committed gave rise to the focus on a physical "crime scene" in the investigation of an offense. With the rise of cyberspace, however, these and other crimes can be committed remotely; people in the U.S. can be defrauded by people physically located in Nigeria (and vice versa). And while we do not have, to the best of my knowledge, a documented instance of remote homicide via cyberspace, I am sure we will see this occur in the not-very-distant future.
Second, because the model assumes physical proximity between perpetrator and victim (and the real-time commission of the offense), it also assumes that crime occurs on a limited scale. In other words, it assumes serial crime: It assumes I defraud A, then move on to defraud B, then to C, and so on; it also assumes a level of preparation and other effort involved in my shift from victim to victim. These assumptions do not hold for online crime: Cyber-fraudsters can send out thousands and thousands and thousands of emails to potential victims, pursue those who "bite" and ultimately commit fraud on a scale that would be impossible in the real, physical world.
Third, the model assumes activity in the real-world that is subject to the physical constraints of the real-world. In the real-world, for example, if I want to rob a bank, and am halfway intelligent, I will have to expend time and effort to investigate the bank so I know when a reasonable amount of money will be there and how bank security operates. The first is to maximize the rewards, the second is to minimize the risk of apprehension. Keeping the second factor in mind, I will have to plan my entry into and exit from the bank, along with orchestrating the robbery once inside. I will have to figure out how to "launder" the proceeds so they do not arouse suspicion, while trying to avoid being foiled by a dye pack. All this makes the commission of the crime more difficult (in terms of avoiding apprehension) and more time-consuming. None of these physical constraints apply to unlawfully extracting money from a bank online; someone with the requisite computer skills and, perhaps, some inside information can transfer funds from accounts with relative ease, with little chance of being physically identified and apprehended while committing the crime or "fleeing" the scene.
Fourth, crime in the real-world falls into certain demographic and geographic patterns. As I have explained elsewhere, law enforcement experts developed crime-mapping techniques that let them identify the areas within a city where certain types of crime are likely to occur. This lets them concentrate their resources in a way that enhances their ability to respond to crimes when they occur. As I have also explained elsewhere, we cannot identify patterns in cybercrime because we lack the foundational data. We have no good statistics on cybercrime, primarily because it so often goes unreported. The problem of under-reporting is exacerbated by the fact that agencies which keep crime statistics may not break offenses out into "crimes" and cybercrimes; real-world and online fraud may, for example, be lumped together in a single category.
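To make the contrast concrete, here is a minimal sketch, in Python, of the kind of aggregation that underlies crime mapping. The incident data and the grid size are invented for illustration; real crime-mapping systems are, of course, far more sophisticated:

```python
from collections import Counter

# Hypothetical incident reports: (offense type, latitude, longitude).
incidents = [
    ("burglary", 40.7128, -74.0060),
    ("burglary", 40.7130, -74.0057),
    ("assault",  40.7306, -73.9866),
    ("burglary", 40.7127, -74.0061),
]

def grid_cell(lat, lon, cell_size=0.005):
    """Snap a coordinate onto a coarse grid so nearby incidents fall in the same cell."""
    return (round(lat / cell_size), round(lon / cell_size))

# Count incidents per cell; the densest cells are the "hot spots" where
# a department might concentrate patrols.
hot_spots = Counter(grid_cell(lat, lon) for _, lat, lon in incidents)
for cell, count in hot_spots.most_common(3):
    print(f"cell {cell}: {count} incident(s)")
```

Nothing comparable is possible for cybercrime until the underlying incidents are actually reported and recorded.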
Because of all this, I have argued elsewhere that we need to develop new strategies for dealing with cybercrime. This post is a preface: Tomorrow (or maybe the next day, depending on how tomorrow goes), I am going to do a post on the rise of vigilantism as a tactic for dealing with at least certain types of cybercrime.
Sunday, February 19, 2006
Seizure
The Fourth Amendment to the U.S. Constitution bans unreasonable "searches" and "seizures." If the Fourth Amendment is to apply, therefore, there must be either a "search" or a "seizure." Today I want to talk about seizures.
In Soldal v. Cook County, 506 U.S. 56 (1992), the Soldals' mobile home was towed away without their permission. They sued, claiming this was an improper "seizure" of their property, and the U.S. Supreme Court ultimately agreed. It noted that a "seizure" of property under the Fourth Amendment "occurs when 'there is some meaningful interference with an individual's possessory interests in that property.'" The Court then found that physically tearing the Soldals' mobile home from its foundation and towing it to another lot was a seizure because these actions effectively divested the Soldals of their possessory interest in their mobile home.
Seizures in the real-world, like towing the Soldals' home, are relatively straightforward, as are seizures of computer hardware. If a police officer takes away my laptop computer, that is clearly a seizure under the standard quoted above. As long as the officer has the laptop, I do not.
As this example and the Soldals' sad tale may illustrate, real-world seizures are zero-sum events: The possession (and use) of property passes completely from one person or entity to another. This can be true in the virtual world: Data can be copied and then deleted from a computer or computer system, the effect being that possession of the data passes completely from the original owner to the person who has done the copying and deleting.
Copying data does not, however, have to be a zero-sum event. In State v. Schwartz, 173 Or. App. 301, 21 P.3d 1128 (Or. App. 2001), Randal Schwartz was prosecuted for computer theft based on his having copied a password file belonging to his employer, the Intel Corporation. Schwartz argued, essentially, that the charge was invalid because he had not "stolen" anything. He argued, quite credibly, that theft in the real-world is a zero-sum event, a circumstance that has historically been reflected in theft statutes.
Traditionally, theft has been defined as taking another's property with the intent to deprive the owner of its possession and use; the definition of theft is, therefore, analogous to the definition of Fourth Amendment seizures of property in that it, too, contemplates a zero-sum event. Schwartz argued that his copying the password file did not constitute a zero-sum event, and that the state could not show he copied the file with the intent to completely deprive Intel of its possession and use.
But although the Oregon theft statute leaned toward the zero-sum conception of theft, the Oregon Court of Appeals rejected his argument, finding, essentially, that his act of copying the data had deprived Intel of something. The court found, basically, that Schwartz had diluted Intel's ability to preserve the confidentiality of the password data; since passwords have value "only so long as no one else knows what they are", Intel had "lost" something, even though it still had the actual password data.
This brings me to the point of this post: It is not settled whether law enforcement's copying data is a seizure under the Fourth Amendment. In United States v. Gorshkov, 2001 WL 1024026 (W.D. Wash. 2001), the district court summarily rejected the defendant's argument that FBI agents had violated his Fourth Amendment rights when they copied computer data belonging to him without first obtaining a warrant. The court indicated that copying data, which apparently does constitute theft, is not a "seizure" under the Fourth Amendment. This is the only reported case to address the issue.
I vehemently disagree with the Gorshkov court. I think that copying data is a seizure. I think it is a seizure for at least two reasons.
One is that, as the Schwartz case demonstrates, something definitely "happens" when data is copied; there is a transfer of some quantum of the value of the data from the original owner to the person who makes the copy. If copying data constitutes theft, then it should also constitute a seizure under the Fourth Amendment.
The other reason may seem ridiculously pragmatic, but I think it is important: If we do not define copying data as a seizure, then I do not think the process of copying data can be brought within the protections of the Fourth Amendment. (Copying data is not a search under the Fourth Amendment because it is possible to copy data without scrutinizing it; for there to be a search, there has to be some review of the contents of the data.)
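To make that distinction concrete, here is a minimal Python sketch of my own (the file paths and the copy_without_review function are hypothetical, purely for illustration) showing that data can be duplicated bit-for-bit without anyone ever reviewing what the bytes mean:

import hashlib
import shutil

def copy_without_review(src: str, dst: str) -> str:
    """Duplicate a file byte-for-byte and return a hash of the copy.

    Nothing here parses, displays, or otherwise "reviews" the content;
    the bytes are handled as an opaque stream, which is the sense in
    which copying can occur without any scrutiny of the data.
    """
    shutil.copyfile(src, dst)  # byte-for-byte duplication, no interpretation
    digest = hashlib.sha256()
    with open(dst, "rb") as fh:  # hashing reads the bytes but does not interpret them
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths, for illustration only:
# fingerprint = copy_without_review("evidence.img", "evidence_copy.img")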
If you would like to read more on this, you can consult the debate Orin Kerr and I had on this and other topics last summer.
Thursday, February 16, 2006
Laptops and border searches
For years and years and years (and years . . . ), Customs Officers in the US and elsewhere have inspected "luggage" as it came into a country. The purpose is to identify goods being imported (i) on which a duty is owed or (ii) that represent contraband or other items the importation of which is not allowed.
In the US, therefore, Customs Officers have the right to open luggage and inspect its contents essentially as they wish. The first Congress gave them this authority back in 1789, and it has been renewed ever since.
Customs Officers' authority to routinely search luggage coming into the United States represents an exception to the usual requirements of the Fourth Amendment, which governs searches such as these. The Fourth Amendment's default position is that law enforcement agents (such as Customs Officers) must obtain a warrant based on probable cause and issued by a magistrate before they can search private property, such as suitcases and other types of "luggage." But there is a long-standing exception, known as the "border search exception," which lets Customs Officers search luggage without obtaining a warrant or having probable cause to believe they will find dutiable goods or contraband in a particular item of luggage. The Supreme Court has upheld this exception on several occasions, so its applicability to normal luggage is without question.
In the last few years, the question has arisen as to whether this exception also applies to a laptop computer being brought into the country (or taken out). Historically, the primary focus of the border search exception has been "luggage" (though it has been extended, on occasion, to other items, such as vehicles crossing territorial borders). We think of "luggage" as suitcases, duffel bags, briefcases, diaper bags, and similar containers for a traveler's belongings.
A laptop is certainly not the kind of container that comes to mind when we think of "luggage," but it is a kind of container. U.S. courts have analogized laptops to containers for the purposes of applying the Fourth Amendment's restrictions on government searches and seizures in other contexts. The premise is that a laptop is a "container" for intangible property -- for data.
Since it is becoming increasingly common for people to bring laptops into (and out of) the U.S., it is not surprising that the question has arisen as to whether Customs Officers can search laptops just as they would any other type of "luggage."
The answer, so far, is "yes."
A number of federal district courts and at least two federal circuit courts of appeals have held that the routine border search exception does apply to laptops. As the District Court for the Southern District of New York explained in United States v. Irving, 2003 WL 22127913 (2003),
"courts have compared personal notebook computers to closed containers for the purposes of the Fourth Amendment analysis. Inspection of the contents of closed containers comes within the scope of a routine border search and is permissible even in the absence of . . . probable cause. Indeed, `[t]he opening of luggage, itself a closed container, is the paradigmatic routine border search.’. . . [A]ny other decision effectively would allow individuals to render graphic contraband, such as child pornography, largely immune to border search simply by scanning images onto a computer disk before arriving at the border. "
This court and other federal courts have, therefore, applied the routine border search exception to uphold Customs Officers' searching through the files on a laptop's hard drive to determine if the laptop contains contraband such as child pornography.
And while these searches usually involve laptops that are coming into the United States, the Fifth Circuit Court of Appeals has held that the exception also applies to searches of laptops that are leaving the country. In United States v. Roberts, 274 F.3d 1007 (5th Cir. 2001), the Fifth Circuit upheld the search of a laptop and diskettes owned by a traveler who was preparing to leave the United States.
In the Roberts case, the district court upheld the search based, in part, on its conclusion that the routine border search exception applies with equal force to searches of property leaving the country. The district court also upheld the search under a "higher" standard, the non-routine border search exception, which lets Customs Officers search luggage (and laptops, apparently) when they have "reasonable suspicion" (a lower standard than probable cause) to believe contraband is in particular luggage. In the Roberts case, the Customs Officers had been informed that Roberts would be leaving the country with child pornography on his laptop and on diskettes he would be carrying with him. When the case reached the Fifth Circuit, the court took the conservative approach and affirmed the district court based on the applicability of the non-routine search exception; it did not reach the issue of the applicability of the routine search exception. It is likely, however, that the Fifth Circuit would have upheld its applicability in this particular context.
These cases and others, including one decided in January, 2006, make it clear that U.S. Customs Officers are searching laptops pursuant to the routine border search exception. This means they can search a laptop without probable cause or even reasonable suspicion to believe it contains contraband or other evidence of a crime. They can, instead, search a laptop in the same way and for the same reasons they elect to search suitcases or other "luggage."
On the one hand, applying the border search exception to laptops is eminently reasonable, as they are, in fact, a type of "container."
On the other hand, applying the exception to laptops seems . . . extreme, for lack of a better word. While laptops are certainly a type of "container," they are, I would argue, a unique type of container . . . one that holds information of a quantity and complexity that already vastly exceeds what could be crammed into a suitcase or any other kind of conventional luggage. And their capacity to store information of all types -- personal, business, whatever -- will only continue to increase, essentially exponentially.
There is also another issue that will arise, though it apparently has not arisen so far. What happens if the laptop's hard drive is encrypted? Can the Customs Officers require the owner of the laptop to give them the encryption key needed to access the contents of the laptop? Can the owner refuse to do so . . . and abandon the trip?
If someone can claim that handing over the encryption key would be testimony that incriminates them in the commission of a crime, they might be able to invoke the Fifth Amendment and refuse to do so, in the U.S. anyway. In some countries, it is clear there is no right to refuse to provide an encryption key when properly requested to do so by law enforcement officers.
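For a concrete sense of why the key matters, here is a minimal Python sketch using the third-party cryptography package (my choice of library is an assumption; any comparable cipher would make the same point). Without the key, the bytes stored on the drive are effectively unreadable to an examiner; with it, recovery is trivial:

from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()  # the secret a traveler might be asked to surrender
cipher = Fernet(key)

plaintext = b"business plans, personal email, whatever lives on the drive"
stored_on_disk = cipher.encrypt(plaintext)  # what an examiner would actually see

# Without the key the ciphertext is just noise; with it, decryption is straightforward.
assert Fernet(key).decrypt(stored_on_disk) == plaintext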
Sunday, February 12, 2006
"Access"
18 U.S. Code section 1030 is the basic federal computer crime statute. Section 1030 makes it a federal crime to use a computer to commit fraud or extortion or to disseminate viruses and other types of malware. Like most state computer crime statutes, it also outlaws gaining "access" to a computer without authorization.
The statute assumes unauthorized access is of two types: (i) access by an outsider who has not been given permission to communicate with a particular computer; and (ii) access by an insider who has been given permission to communicate with a computer at a specific level, but who goes beyond the scope of that authorization. The first alternative is usually known as "unauthorized access," while the second is called "exceeding authorized access." See 18 U.S. Code section 1030.
The statute defines "computer" and other relevant terms, but it does not define "access," which seems a peculiarly basic omission. It is also surprising to learn that there is relatively little case law on the definition of "access" in this context.
The case that is usually cited on this issue is State v. Allen, 260 Kan. 107, 917 P.2d 848 (Kan. 1996). Allen was charged, essentially, with gaining "access" to Southwestern Bell's computers without authorization. The State's evidence showed that, in that era of dial-up connections, Allen had been wardialing, i.e., had used his computer to repeatedly call Southwestern Bell modems that could let a caller "enter" the Southwestern Bell computer system. The evidence also showed that if a call went through, the computer determined whether it had been answered by a modem or by a person, after which it terminated the connection.
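For readers unfamiliar with the technique, the following is a purely illustrative Python simulation of the wardialing loop the court described: step through a block of numbers, note which ones answer with a modem, and hang up. The dial_number function is a stub of my own invention; it places no real calls.

import random

def dial_number(number: str) -> str:
    """Stub standing in for a dial-up attempt (no real call is made).

    Returns "modem", "person", or "no answer", mimicking the outcomes
    described in the Allen opinion.
    """
    return random.choice(["modem", "person", "no answer"])

def wardial(prefix: str, start: int, end: int) -> list[str]:
    """Sweep a block of numbers and record those answered by a modem."""
    modems = []
    for suffix in range(start, end):
        number = f"{prefix}-{suffix:04d}"
        if dial_number(number) == "modem":
            modems.append(number)
        # The connection is dropped either way; nothing beyond the modem is touched.
    return modems

# Hypothetical exchange, for illustration only:
# print(wardial("555", 0, 100))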
The issue that went to the Kansas Supreme Court was whether Allen had "accessed" the Southwestern Bell computers; if he had, then the access would have been without authorization and the crime would have been committed. But if he had not accessed the computer, then the charged crime had not been committed.
The Kansas statute (like some state statutes in effect today) defined "access" as "to approach, instruct, communicate with, store data in, retrieve data from, or otherwise make use of" a computer. Kansas Statutes Annotated section 21-3755. The state argued that, at a minimum, Allen had "approached" the Southwestern Bell computers, but the Kansas Supreme Court disagreed. It agreed with a U.S. Department of Justice report which concluded that this use of "access" was unconstitutionally vague because it did not provide sufficient notice of what is forbidden; as the DOJ report pointed out, this interpretation of "access" would criminalize mere physical proximity to a computer.
The Kansas Supreme Court held that the evidence did not support the State's contention that Allen had "accessed" the Southwestern Bell computers because there was no evidence that he had "made use" of them or had been in a position to do so. It therefore upheld the lower court's dismissal of the charge against Allen.
The holding in Allen, which is still valid precedent on the issue of "accessing" a computer, would suggest that port-scanning, the process of searching a network for open ports that can be used to "access" the network, is not a crime. Surprisingly, perhaps, we have no criminal cases on this precise issue. The only decision in U.S. law on whether port-scanning is a violation of statutes like 18 U.S. Code section 1030 is a civil case. (Section 1030 also creates a civil cause of action for one whose computer has been attacked in violation of the statute.) In Moulton v. VC3, 2000 WL 33310901 (N.D. Ga. 2000), the court held that port-scanning was not a violation of section 1030.
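Port-scanning works on a similar logic: the scanner simply asks each port whether it will accept a connection and records the answer. Here is a minimal Python sketch (the host and port numbers are examples of my own, and it should only be aimed at a machine you control, such as localhost):

import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the ports on host that accept a TCP connection.

    Nothing is sent after the connection attempt; the probe only learns
    whether a given "door" would open.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Example: probe a few common ports on your own machine.
# print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))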
Another way to attack this issue is to charge that a defendant -- like Allen -- is attempting to gain access to a computer or computer system without being authorized to do so. Section 1030(b) makes it a crime to attempt to commit any of the intrusions outlawed by section 1030(a), and most state computer crime statutes do something similar.
One wonders why the Allen prosecutor did not try this approach.
Cartapping
Almost forty years ago, in Katz v. U.S., 389 U.S. 347 (1967), the U.S. Supreme Court held that it was a violation of the Fourth Amendment's ban on "unreasonable" searches and seizures for law enforcement officers to wiretap a telephone call.
In Katz, the FBI put a wiretap device on the outside of a phone booth, knowing Charles Katz, a suspected bookie, would use the booth to make calls concerning illegal bets. Until 1967, it was not considered a "search" for officers to do this; the U.S. Supreme Court had held in 1928 that the FBI did not violate the Fourth Amendment by using a wiretap on phone lines outside a home to eavesdrop on a telephone call being made from inside the home. The result of the Court's 1928 decision was that officers did not need a warrant to wiretap telephone conversations.
The Katz Court rejected this notion, holding that it is a search to wiretap a telephone call when the callers have taken steps to ensure their conversation is "private." The Court found that Katz had done this when he entered the phone booth and closed the door behind him. So, the rule that comes from the Katz case is that it is a "search" for law enforcement officers to violate a "reasonable expectation of privacy" by using technology or by more traditional means, such as kicking down the door to someone's apartment and entering to "look around." A "reasonable expectation of privacy" exists when (i) I think something (a place, an activity) is private and (ii) society agrees that it is, in fact, private. The Katz Court noted, however, that whatever one "knowingly" exposes to public view, "even in his own home or office," is "not a subject of Fourth Amendment protection."
This brings us to something new: cartapping.
In 2003, the U.S. Court of Appeals for the Ninth Circuit issued a decision, In the Matter of the Application of the United States for an Order Authorizing the Roving Interception of Oral Communications, in case # 02-15635, which was an appeal from the U.S. District Court for the District of Nevada. The issue in the case was whether the FBI (again) could obtain a court order compelling a car manufacturer to use technology installed in one of its automobiles to let FBI agents eavesdrop on conversations in the car.
The opinion carefully does not identify the car manufacturer, though it cites BMW and Cadillac as cars that have such technology installed in them. The technology -- which the opinion calls "the System" -- is becoming increasingly common: on-board telecommunications systems that "assist drivers in activities from the mundane -- such as navigating an unfamiliar neighborhood or finding a nearby Chinese restaurant -- to the more vital -- such as responding to emergencies or obtaining roadside assistance." In the Matter of the Application, supra. As the opinion notes, these systems rely on a combination of Global Positioning System technology and cellular phone connections.
The System installed in the vehicle at issue in this case allowed the manufacturer to open a cellular connection and listen in on conversations held in the car. (According to the opinion, the purpose of this feature of the System was to let the manufacturer assist police in locating stolen vehicles; the rather peculiar assumption seems to have been that vehicles would be stolen by two or more thieves, who would discuss the theft and/or their whereabouts as they fled the scene of their crime.)
Anyway, the FBI figured out that the System could be used to eavesdrop on conversations being held in a vehicle owned by what we might call "a person of interest." FBI agents got the district court to order the manufacturer to cooperate by opening the cellular connection and letting FBI agents use it to overhear what was said in the vehicle. The manufacturer complied with the first order, challenged subsequent orders unsuccessfully in the district court and so complied with them. The manufacturer appealed the district court's rulings, and the Ninth Circuit sided with the company.
The Ninth Circuit based its holding on a technical issue that arises under the federal statutory scheme that implements the Katz decision. Known as Title III, this statutory scheme requires law enforcement officers to obtain a Title III warrant before they intercept telephone or other communications. The statutory scheme establishes a Fourth Amendment-plus standard for obtaining such orders. This statutory scheme allows courts to order private parties -- such as a telephone company -- to assist law enforcement in intercepting calls and other communications, but it specifically provides that such assistance cannot be required when it would substantially interfere with the private entity's ability to provide the services it has contracted for. The Ninth Circuit found that was true here; when the cellular connection was open, it essentially shut down the System's other functions.
That is not what I find interesting about the case. What I find interesting is whether, given the increasing proliferation of systems like this, we still have a reasonable expectation of privacy in our cars.
For many decades, in real life and in cinema, the car was the place people went to when they wanted to talk without fear of being overheard. And until recently, anyway, people would have had a reasonable expectation of privacy, under Katz, in what they said in their cars. People believed the interior of their car was, like a telephone booth, a private place for conversations; others could see inside, but they could not hear what was said inside, at least not if the windows were closed and the people inside spoke softly. And since everyone believed this, it was an expectation society regarded as reasonable.
But what about now? Remember, the Katz Court said the Fourth Amendment does not protect what we "knowingly" expose to public view or public hearing. This is known as the assumption of risk principle: If I do not take steps to prevent my conversations from being overheard, then I have assumed the risk they will be overheard and I have no Fourth Amendment expectation of privacy in them.
If I buy a car, knowing it has a version of the System installed in it, and knowing that the System can be used to listen in on what is said in the car, haven't I assumed the risk that someone will listen in? If so, I have lost any expectation of privacy in the car under Katz.
I threw this issue out in my cybercrimes seminar last week. One student's husband has the System in his car; she said that the operators often open up a connection essentially to "check in" with the occupants, asking them if they are "all right," for example. Based on this, she says she thinks the car is "about as private as a park bench."
Another student, whose car also has a version of the System but who does not have operators checking in to see how she is doing, says she believes the presence of the System does not alter our Fourth Amendment expectation of privacy in vehicles at all. She bases her view on the premise that she has to initiate contact with the operators of the System in her vehicle (and is charged every time she does so). She therefore concludes that it would be an illegitimate use of the System for those who monitor the technology in her vehicle to listen in on conversations held in the car.
I have no conclusion on this one, just thoughts.
You can find the opinions cited above in two places: (i) both are on Findlaw and (ii) the Ninth Circuit's opinion is on the Ninth Circuit's site.
Tuesday, February 07, 2006
"Annoy"
Recently, Congress passed and the President signed the Violence Against Women and Department of Justice Reauthorization Act (H.R. 3402). One provision of the Act has gained a fair amount of media attention.
Section 113 of H.R. 3402 amended 47 U.S. Code section 223, which has for years made it a federal crime to use a telephone or some other type of telecommunications to make obscene or harassing telephone calls. Section 223(a)(1)(C) makes it a federal crime to make a phone call or utilize "a telecommunications device, whether or not conversation or communication ensues, without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person at the called number or who receives the communications".
Until this recent amendment, section 223(h) defined "telecommunications device" in a way that excluded email and other online communications. Specifically, section 223(h)(2) said that the definition of "telecommunications device" used in this statute did "not include an interactive computer service". This is where the amendment comes in: Section 113 of H.R. 3402 modified this definition so it now includes online communications. To be precise, section 113 added this definitional language to section 223(h): "in the case of subparagraph (C) of subsection (a)(1), [the definition of 'telecommunications device'] includes any device or software that can be used to originate telecommunications or other types of communications that are transmitted, in whole or in part, by the Internet".
Many have expressed concern about the effect of this amendment, since it essentially means one commits a federal crime if she uses the Internet anonymously to (i) abuse (ii) threaten (iii) harass or (iv) annoy someone else. The concern appears to lie not with categories (i)-(iii) but with category (iv) -- with the notion that I commit a federal crime if I send an anonymous email or transmit some other anonymous communication with the purpose of annoying someone. (The language of the statute makes it appear that the crime is committed whether or not annoyance actually results . . . . )
So, as I see it, this amendment creates two issues: (1) Why is "annoy" in there at all? (2) What is the effect of including it -- can someone really be convicted for simply being "annoying" online?
Let's start with (1). Section 223 is part of the Communications Act of 1934, Congress' first real venture into regulating wire and radio communications in the United States; the original Act did not include any version of section 223. Section 223 was added in 1968, by Public Law No. 90-299, 82 Stat. 112. The purpose was to address what Congress saw as an evolving evil:
"Since its invention, the telephone has been the source of many and great benefits to the American people. But recently its use has been perverted by some to make it an instrument for inflicting incalculable fear, abuse, annoyance, hardship, disgust, and grief on innocent victims. . . It is hard to imagine the terror caused to an innocent person when she answers the telephone, perhaps late at night, to hear nothing but a tirade of threats, curses, and obscenities, or equally frightening, to hear only heavy breathing."
H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968).
The 1968 legislation, therefore, specifically targeted misuse of the telephone. The language that now appears in section 223(a)(1)(C) was in this original version of section 223, though it then read as follows: "[Whoever] makes a telephone call, whether or not conversation ensues, without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person at the called number" commits a federal crime. This language migrated to a new section 223(a) in 1983, when Congress amended the statute to add language criminalizing the use of a telephone to make an "obscene or indecent communication for commercial purposes to any person under eighteen years of age or to any other person without that person's consent". This expansion of the statute was prompted by public concern about children's accessing dial-a-porn services. See John C. Cleary, Telephone Pornography: First Amendment Constraints on Shielding Children from Dial-a-porn, 22 Harvard Journal on Legislation 503 (1985).
The statute was amended again in 1996, by the Communications Decency Act, Public Law No. 104-104 sec. 502, 110 Stat. 56 (1996). The CDA made several changes in section 223, one of which was to expand it to encompass the use of a "telecommunications device," as well as a telephone. It also added provisions criminalizing the use of a telephone or telecommunications device to transmit an obscene or "indecent" communication to a minor.
So, the answer to question (1) is that "annoy" is an artifact that dates back to the original, 1968 version of the statute, which was intended to criminalize threatening and generally harassing phone calls. The legislative history for that provision does not parse the various terms used in the original section 223 -- annoy, abuse, threaten, harass -- into separate criminal acts. Instead, it seems to combine all four into a single "evil" at which the statute is directed. See H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968). Congress solicited the Department of Justice's views on the provision, and then-Deputy Attorney General Warren Christopher replied. In his letter, he refers to the terms as a "descriptive series" and seems to imply that the "series" denotes harassment.
This, finally, brings us to question #2: What is the effect of including "annoy" in section 223? Is it intended to be a severable provision and a distinct offense?
The history lesson provided above indicates this was not Congress' intent. Congress meant to criminalize using a telephone to harass others, and some versions of what ultimately became section 223 only referred to using a phone to "harass" someone. See H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968). It is therefore logical to construe the four words (annoy, abuse, threaten and harass) included in the final version as a single collective term denoting the act of harassing someone.
Any other result would almost certainly violate the First Amendment. It is a basic principle of constitutional law that a criminal statute must "provide the kind of notice that will enable ordinary people to understand what conduct it prohibits". City of Chicago v. Morales, 527 U.S. 41, 56 (1999). The Supreme Court has held that a criminal statute violates due process if it "is so vague and standardless" it "leaves the public uncertain as to the conduct it prohibits". Giaccio v. Pennsylvania, 382 U.S. 399, 402-403 (1966).
In Coates v. City of Cincinnati, 402 U.S. 611 (1971), the Supreme Court held that an ordinance that made it unlawful for three or more people to assemble and conduct themselves in an "annoying" manner was unconstitutionally vague and therefore could not be enforced. The Court explained that conduct which "annoys some people does not annoy others. Thus, the ordinance is vague, not in the sense that it requires a person to conform his conduct to an imprecise but comprehensible normative standard, but rather in the sense that no standard of conduct is specified at all." 402 U.S. at 614.
The U.S. Court of Appeals for the Sixth Circuit reached a similar conclusion in 2004, as to the provisions of section 223. Erik Bowker was convicted of violating section 223 as it existed prior to the recent amendment, and appealed, arguing that the statute was unconstitutionally vague. United States v. Bowker, 372 F.3d 365 (6th Cir. 2004). Bowker cited the Coates decision for the proposition that a statute which outlaws "annoying" behavior is so vague as to violate due process. The Sixth Circuit agreed that "`annoy', standing alone" can raise due process issues. It upheld section 223, however, because it found that "the words annoy, abuse, threaten or harass should be read together to be given similar meanings. Any vagueness associated with the word `annoy' is mitigated by the fact that the meanings of `threaten' and `harass' can easily be ascertained and have generally accepted meanings." 372 F.3d at 383.
So, where does all this leave us?
It means that if a federal prosecutor were to bring charges based solely on the claim that someone used Internet transmissions, anonymously, for the single purpose of "annoying" another person, the charge would almost certainly be thrown out as being unconstitutionally vague. (It could also be attacked as contrary to the legislative history of the provision, for the reasons set out above.)
It very likely means that, as a practical matter, charges under the statute will be brought only when the offender's conduct rises far beyond the level of "annoying" someone (which was true in the Bowker case).
It means that the language of section 223 is facially overbroad, but courts can still uphold the constitutionality of the statute by using the Sixth Circuit's approach, outlined above.
It does not mean there is not cause for aggravation, maybe even some level of concern. But it almost certainly does not mean that we have to fear a rash of "annoy" prosecutions in the near future.
Section 113 of H.R. 3402 amended 47 U.S. Code section 223, which has for years made it a federal crime to use a telephone or some other type of telecommunications to make obscene or harassing telephone calls. Section 223(a)(1)(C) makes it a federal crime to make a phone call or utilize "a telecommunications device, whether or not conversation or communication ensues, without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person at the called number or who receives the communications".
Until this recent amendment, section 223(h) defined "telecommunications device" in a way that excluded email and other online communications. Specifically, section 223(h)(2) said that the definition of "telecommunications device" used in this statute did "not include an interactive computer service". This is where the amendment comes in: Section 113 of H.R. 3402 modified this definition so it now includes online communications. To be precise, section 113 added this definitional section to 223(h): "in the case of subparagraph (C) of subsection (a)(1), 'the definition of "telecommunications device"] includes any device or software that can be used to originate telecommunications or other types of communications that are transmitted, in whole or in part, by the Internet".
Many have expressed concern about the effect of this amendment, since it essentially means one commits a federal crime if she uses the Internet anonymously to (i) abuse (ii) threaten (iii) harass or (iv) annoy someone else. The concern appears to lie not with categories (i)-(iii) but with category (iv) -- with the notion that I commit a federal crime if I send an anonymous email or transmit some other anonymous communication with the purpose of annoying someone. (The language of the statute makes it appear that the crime is committed whether or not annoyance actually results . . . . )
So, as I see it, this amendment creates two issues: (1) Why is "annoy" in there at all? (2) What is the effect of including it -- can someone really be convicted for simply being "annoying" online?
Let's start with (1). Section 223 was added to the federal code by the Communications Act of 1934. This Act was Congress' first real venture into regulating wire and radio communications in the United States; it did not include any version of section 223. Section 223 was added in 1968, by Public Law. No. 90-299, 82 Stat. 112. The purpose was to address what Congress saw as an evolving evil:
"Since its invention, the telephone has been the source of many and great benefits to the American people. But recently its use has been perverted by some to make it an instrument for inflicting incalculable fear, abuse, annoyance, hardship, disgust, and grief on innocent victims. . . It is hard to imagine the terror caused to an innocent person when she answers the telephone, perhaps late at night, to hear nothing but a tirade of threats, curses, and obscenities, or equally frightening, to hear only heavy breathing."
H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968).
H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968).
The 1968 legislation, therefore, specifically targeted misuse of the telephone. The language that now appears in section 223(a)(1)(C) was in this original version of section 223, though it then read as follows: "[Whoever] makes a telephone call, whether or not conversation ensues, without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person at the called number" commits a federal crime. This language migrated to a new section 223(a) in 1983, when Congress amended the statute to add language criminalizing the use of a telephone to make an "obscene or indecent communication for commercial purposes to any person under eighteen years of age or to any other person without that person's consent". This expansion of the statute was prompted by public concern about children's accessing dial-a-porn services. See John C. Cleary, Telephone Pornography: First Amendment Constraints on Shielding Children from Dial-a-porn, 22 Harvard Journal on Legislation 503 (1985).
The statute was amended again in 1996, by the Communications Decency Act, Public Law No. 104-104 sec. 502, 110 Stat. 56 (1996). The CDA made several changes in section 223, one of which was to expand it to encompass the use of a "telecommunications device," as well as a telephone. It also added provisions criminalizing the use of a telephone or telecommunications device to transmit an obscene or "indecent" communication to a minor.
So, the answer to question (1) is that "annoy" is an artifact that dates back to the original, 1968 version of the statute, which was intended to criminalize threatening and generally harassing phone calls. The legislative history for that provision does not parse the various terms used in the original section 223 -- annoy, abuse, threaten, harass -- into separate criminal acts. Instead, it seems to combine all four into a single "evil" at which the statute is directed. See H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968). Congress solicited the Department of Justice's views on the provision then-Attorney General Warren Christopher replied. In his letter, he refers to the terms as a "descriptive series" and seems to imply that the "series" denotes harassment.
This, finally, brings us to question #2: What is the effect of including "annoy" in section 223? Is it intended to be a severable provision and a distinct offense?
The history lesson provided above indicates this was not Congress' intent. Congress meant to criminalize using a telephone to harass others, and some versions of what ultimately became section 223 only referred to using a phone to "harass" someone. See H.R. Report 90-1109, 1968 U.S.C.C.A.N. 1915, 1916 (1968). It is therefore logical to construe the four words (annoy, abuse, threaten and harass) included in the final version as a single collective term denoting the act of harassing someone.
Any other result would almost certainly violate the First Amendment. It is a basic principle of constitutional law that a criminal statute must "provide the kind of notice that will enable ordinary people to understand what conduct it prohibits". City of Chicago v. Morales, 527 U.S. 41, 56 (1999). The Supreme Court has held that a criminal statute violates due process if it "is so vague and standardless" it "leaves the public uncertain as to the conduct it prohibits". Giaccio v. Pennsylvania, 382 U.S. 399, 402-403 (1966).
In Coates v. City of Cincinnati, 402 U.S. 611 (1971), the Supreme Court held that an ordinance that made it unlawful for three or more people to assemble and conduct themselves in an "annoying" manner was unconstitutionally vague and therefore could not be enforced. The Court explained that conduct which "annoys some people does not annoy others. Thus, the ordinance is vague, not in the sense that it requires a person to conform his conduct to an imprecise but comprehensible normative standard, but rather in the sense that no standard of conduct is specified at all." 402 U.S. at 614.
The U.S. Court of Appeals for the Sixth Circuit reached a similar conclusion in 2004, as to the provisions of section 223. Erik Bowker was convicted of violating section 223 as it existed prior to the recent amendment, and appealed, arguing that the statute was unconstitutionally vague. United States v. Bowker, 372 F.3d 365 (6th Cir. 2004). Bowker cited the Coates decision for the proposition that a statute which outlaws "annoying" behavior is so vague as to violate due process. The Sixth Circuit agreed that "`annoy', standing alone" can raise due process issues. It upheld section 223, however, because it found that "the words annoy, abuse, threaten or harass should be read together to be given similar meanings. Any vagueness associated with the word `annoy' is mitigated by the fact that the meanings of `threaten' and `harass' can easily be ascertained and have generally accepted meanings." 372 F.3d at 383.
So, where does all this leave us?
It means that if a federal prosecutor were to bring charges based solely on the claim that someone used Internet transmissions, anonymously, for the single purpose of "annoying" another person, the charge would almost certainly be thrown out as unconstitutionally vague. (It could also be attacked as contrary to the legislative history of the provision, for the reasons set out above.)
It very likely means that, as a practical matter, charges under the statute will be brought only when the offender's conduct rises far beyond the level of "annoying" someone (which was true in the Bowker case).
It means that the language of section 223 is facially overbroad, but courts can still uphold the constitutionality of the statute by using the Sixth Circuit's approach, outlined above.
It does not mean there is no cause for aggravation, maybe even some level of concern. But it almost certainly does not mean that we have to fear a rash of "annoy" prosecutions in the near future.
Sunday, February 05, 2006
Organized crime
This is a mug shot for Frank Costello, a notorious and very influential mobster in the 1940's and '50's.
In an earlier post, I took issue with a chapter in a 1983 book on computer crime. The chapter focused on the Mafia's involvement in computer crime; I said I think the author was wrong, both as to the state of affairs when he was writing and as to how cybercrime has evolved, at least to this point.
While I am sure the Mafia uses computers and the Internet in various ways, merely using a computer does not transform traditional criminal activity into cybercrime. It is, for example, increasingly common for drug dealers to use computers to keep track of their inventory, their sales and other matters. A drug dealer's using a computer for this purpose does not transform her drug-dealing into a cybercrime, just as a blackmailer's use of a computer to write the email or letter he sends to his victim does not transform his activity into cyber-blackmail.
Now, having said all that, I do happen to believe that cyberspace is, and will be, fertile ground for organized crime. . . . albeit a new type of organized crime.
In Organized Cybercrime? How Cyberspace May Affect the Structure of Criminal Relationships, 4 N.C. J. L. & Tech. 1 (2002), http://www.jolt.unc.edu/Vol4_I1/Web/Brenner-V4I1.pdf, I argue that cyberspace will give rise to new, more flexible arrays of organized criminal activity. As the article explains, the modern Mafia was "invented" in the early decades of the twentieth century and, for that reason, conformed to the organizational model that was influential in other sectors of society, such as commerce, industry and the military. The dominant model at that time, and for many years before, was the hierarchical organization, with fixed, rigidly-ordered relationships among the members of the organization. As I explain in the article, this type of organization is effective for orchestrating activity in the real world.
I contend that this traditional, hierarchical model is neither necessary nor particularly effective for orchestrating activity in the networked world of cyberspace. As I explain in the article, I think we will see (are, in fact, already seeing) fluid, shifting organizational patterns among cybercriminals.
As I also explain in the article, I think these more volatile forms of criminal organization will pose great challenges for law enforcement. In Frank Costello's time, the Mafia had stable, identifiable leadership and an equally stable, identifiable cadre of minions to carry out the directives of the leaders. This type of fixed, persistent organizational structure makes it relatively easy, over time, to identify critical personnel and map the organization's activities. This becomes much more difficult when criminal organizations become situational -- coalitions that develop to carry out particular activity and then disappear or merge into another evolving criminal coalition.
Saturday, February 04, 2006
In my last post, I talked about how we tend to overlook the threat from insiders because we have become so focused on the external threat -- break-ins by a hacker. I want to follow up with some observations on an issue that arises with regard to attacks by "insiders," who are usually disgruntled employees.
In the U.S., the federal system and every state make it a crime to gain "unauthorized access" to a computer system. This crime reaches the conduct noted above: an outsider who is not supposed to be able to access a computer system gains access by compromising the security that was supposed to keep him out.
There is, though, another kind of "unauthorized access," one that is outlawed in many states and, in some forms, at the federal level, as well. This takes the form of an insider's exceeding the access she legitimately has to a computer system. This type of conduct can be problematic for the law because you are dealing with someone who is authorized to access a computer system, at least for certain purposes; the question of criminal liability arises when she either goes beyond the scope of her authorized access or uses her authorized access for illegitimate purposes.
For example, on October 18, 2001, Philadelphia Police Officer Gina McFadden was on patrol with her partner. That day, the computer in McFadden's patrol car, like the computer in all the other patrol cars, was broadcasting a message about a missing truck containing hazardous materials. For some incomprehensible reason, at 1:00 that afternoon McFadden used the computer in her patrol car to transmit a message that ostensibly came from terrorists; in profane language, the message stated that it was from people who hated America and who had "anthrax in the back of our car". (State v. McFadden, 850 A.2d 1290 (Pa. Super. Ct. 2004)). The investigation launched into this transmission focused on McFadden, who ultimately admitted sending the message.
She was charged with and convicted of "intentionally and without authorization" accessing a computer system. (18 Pa. Cons. Stat. Ann. sec. 3933(a)(2)). McFadden argued that she was improperly convicted because she was authorized to use the computer in her patrol car. The appellate court rejected this argument, explaining that while McFadden was authorized to access the computer for purposes "of official police business, she was not authorized to access the computer for any other purposes. . . . She certainly was not authorized to access the computer for the purpose of distributing a message which implied that a Philadelphia police car had been contaminated with anthrax by terrorists." (State v. McFadden, 850 A.2d 1290 (Pa. Super. Ct. 2004)).
What the court did not explain, of course, is precisely how McFadden knew she was not authorized to do this; common sense tells us that her conduct was beyond the pale, but common sense cannot substitute for legal standards when criminal liability is at issue. McFadden's crime is more accurately described as "exceeding authorized access." This captures the "insider" aspect of the offense. We do not know if she sent the bizarre message because she was angry at the police department that employed her and wanted to strike back, or whether she simply had an unfortunate sense of humor.
There are many "insider" cases, but one from Georgia captures the peculiar difficulties that can arise when a trusted insider goes rogue. Some years ago, Sam Fugarino worked as a computer programmer for a company that designed software for surveyons. (Fugarino v. State, 243 Ga. App. 268, 531 S.E.2d 187 (Ga. App. Ct. 2000)). He had become a "difficult" employee, but went around the bend when the company hired a new worker, in a completely unrelated position.
Sam became visibly upset, telling a co-worker that the "code was his product" and "no one else was going to work on his code". The other employee saw that Sam was deleting massive amounts of files, so that whole pages of code were disappearing before this employee's eyes. The employee ran to the owner of the company, who came to Sam's desk. Sam told the owner that the "code was his" and that the owner would never "get to make any money" from it. The owner managed to convince Sam to leave the premises, but then discovered Sam had added layers of password protection to the computer system, the net effect of which was to lock the owner and other employees out of the program Sam had been designing.
The upshot of all this was that Sam was charged with "computer trespass" under Georgia law. More precisely, he was charged with using a computer system "with knowledge that such use is without authority" and deleting data from that system. (Ga. Code sec. 16-9-93(b)). Sam was tried, convicted and appealed, claiming that his use of the computer system was not "without authority". Sam, of course, had full access to the computer system; and as a programmer whose job was developing software, he was authorized to use his access not only to write code but also to delete code.
The Georgia appellate court upheld Sam's conviction, using a common-sense, "you should have known what you were doing was wrong" approach very similar to that used by the McFadden court. It noted that at trial the owner of the company testified he had not given Sam authority to delete "portions of the company's program" . . . which ignores the fact that Sam clearly did have authority to do precisely this.
The issue is one of degree: Sam was authorized to delete program code as part of his work developing software; the problem is that he clearly went too far, that he was apparently bent on erasing all of the program code. Clearly, the owner had not specifically told Sam that he was not authorized to delete an entire program; the need to do so had probably never occurred to him.
The Fugarino case illustrates a difficult question that arises when "insiders" are prosecuted for "exceeding authorized access." How, precisely, is someone to know when she exceeds authorized access? Relying on the common-sense, "you-should-have-known-it-when-you-did-it" approach taken by these courts is, I would argue, quite unsatisfactory. The over-the-top nature of the conduct at issue in these cases may make the approach seem reasonable, but in fact it is not.
Every organization has a host of trusted "insiders" who have authorized access, in varying degrees, to the organization's computer system. Like Sam's employer, most organizations seem to assume that insiders understand the scope of their authorized access and will abide by that understanding. This assumption no doubt derives from our experience with physical security. It is relatively easy to deny employees access to physical spaces; physical boundaries are fixed and obvious. Assume Sam had a key to his own office, but not to his employer's office. If Sam had been found in his employer's (formerly) locked office shredding documents, he could not credibly have claimed that his "access" to the office and the files locked inside was "authorized." It would be reasonable to infer from his conduct (somehow breaking into a locked office) that he knew he was not authorized to be there, that he was doing something "wrong."
Virtual boundaries tend to be invisible and mutable. In a literal sense, Sam did nothing he was not authorized to do; he did not (virtually) break into a locked area and attack data shielded inside. He was authorized to delete code and he deleted code.
As a matter of simple fairness, criminal law demands that one be put on notice as to what is, and is not, forbidden. The question raised by cases like these is precisely how we do this for the "insiders" who legitimately have access to our computer systems.
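The notice problem has at least a partial technical analogue. An organization can, in principle, make virtual boundaries nearly as explicit as a locked office door by declaring the scope of each insider's authorization in the system itself and refusing (or at least flagging and logging) actions that fall outside it. The sketch below is purely illustrative and is not drawn from either of the cases discussed above; the roles, actions and policy table are hypothetical.

```python
# Illustrative only: a minimal, hypothetical access-policy check that makes
# the scope of an insider's authorization explicit, so that "exceeding
# authorized access" becomes a documented boundary rather than an
# after-the-fact judgment call. Roles, actions and messages are invented.

from datetime import datetime, timezone

# Hypothetical policy: each role maps to the set of actions it may perform.
POLICY = {
    "patrol_officer": {"read_dispatch", "run_plate_query"},
    "programmer":     {"read_code", "write_code", "delete_own_branch"},
}

AUDIT_LOG = []  # in a real system this would be durable, append-only storage


def is_authorized(role: str, action: str) -> bool:
    """Return True only if the action falls within the role's declared scope."""
    return action in POLICY.get(role, set())


def perform(user: str, role: str, action: str) -> bool:
    """Attempt an action; allow it only if authorized, and log the attempt either way."""
    allowed = is_authorized(role, action)
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        # The recorded denial is the "notice": the boundary is explicit and logged.
        print(f"DENIED: {user} ({role}) is not authorized to {action}")
        return False
    print(f"OK: {user} ({role}) performed {action}")
    return True


if __name__ == "__main__":
    perform("officer_1", "patrol_officer", "read_dispatch")      # within scope
    perform("officer_1", "patrol_officer", "broadcast_message")  # outside scope
    perform("dev_1", "programmer", "delete_entire_program")      # outside scope
```

No policy table will anticipate every misuse, of course, but an explicit, logged denial puts an insider on something much closer to the notice the criminal law demands than an unstated expectation ever could.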
Friday, February 03, 2006
History lesson
I'm reading a book on computer crime (How to Prevent Computer Crime by August Bequai) that was published in 1983.
I'm reading it because I thought it would be interesting to see what people were saying about computer crime twenty or even thirty years ago. I Amazoned some used books on the subject, this one arrived first, and so it's my initial venture into the historical literature of cybercrime.
This particular book is interesting for two reasons.
One is that although it was published in 1983, it was clearly written at a time when the notion of the Internet had not entered the public consciousness. The book therefore focuses primarily on what we could call "insider attacks" (more on those below) -- crimes committed by someone who is in physical proximity to the computer or computer system that is involved in the commission of cybercrime.
As a result, the book addresses some activities we do not hear much about today, things like vandalizing a computer by throwing paint on it. It does discuss activities we still deal with, such as using computers to commit fraud, to embezzle funds and to steal trade secrets. It cites a "computer caper" from California in which a gang of "`computernicks'" used their access to computers to provide "improved ratings to consumers with poor credit histories." (Bequai, How to Prevent Computer Crime, p. 41.) As far as I can tell (so far), neither the text nor the glossary uses the term "hacker." And while there are references to people having, and using, computers at home, and to businesses using computers to communicate via "telephone lines," there is no treatment of networked crime, such as computer intrusions coming from "outsiders."
The book has what seems, to me, to be a peculiar chapter on "infiltration by organized crime." As far as I know, the Mafia (which is what the book means by "organized crime") has never been a player in computer crime. The chapter suggests (very problematically, IMHO) that the Mafia was using computers to facilitate drug-dealing, loansharking, labor racketeering, thefts of cargo and prostitution. I'm dubious. What is interesting is that the author suggests that "the syndicate" could use computers to extort funds from a business, say, by attacking or threatening to attack its computers. The prediction ultimately came true, though it's not the Mafia we have to worry about in these attacks.
So, one reason the book is interesting is that it is, in effect, quaint . . . We learn how cybercrime was perceived almost a quarter century ago.
The other reason the book is interesting lies in what it can teach us: Since we are so immersed in a network culture, we tend to associate "cybercrime" with outsiders, with hackers who use their computer skills and the Internet to "break into" unsuspecting computer systems and wreak havoc, on some level. Unlike the author of this book, we tend to overlook the threat posed by the insider, by the employee or former employee or temp or consultant who steals company secrets or sabotages data for profit, for revenge or for "fun."
A 2004 study by the U.S. Secret Service and CERT found that the insider threat was a major problem in the banking and financial sector. And a study released at the end of 2005 found that the insider threat is an equally serious problem in Europe.
Thursday, February 02, 2006
Robin Hood . . .
Cybercrimes fall, I would argue, into two basic categories: Those that are committed for money; and those that are not.
The vast majority of cybercrimes (fraud, extortion, identity theft, etc.) are clearly committed for money -- to enrich the person who commits the cybercrime.
But a story I read yesterday reminded me of the possibilities of becoming an online Robin Hood.
New Zealander Thomas Gawith is apparently a hacker with a sense of social obligation. He broke into six bank accounts to which he had no legal claim, extracted roughly $13,700 from the accounts and transferred the money to individuals whom he deemed to be "poor" and in need of funds. When his exploit came to light and he was questioned by local police, he said he didn't think he'd done anything wrong because he did not keep any of the money for himself. The police disagreed: He was charged with six counts of "computer crime," pled guilty to all or some of those counts, and will be sentenced on March 2.
Gawith reminds me of a similar, though anonymous exploit I read about several years ago: An unknown hacker accessed a server used by an online casino and altered its programming so that, for an hour or two, everyone who played poker or the slots won. The people playing those games won roughly $1.9 million before the casino discovered what was happening; according to the news report I saw, the casino honored its commitment to the players and paid up.
It's interesting to note that there are at least a few cyber-Robin Hoods out there.
It's also interesting to contemplate how these activities fall into the category of cybercrime:
Gawith clearly committed "computer crime" in the sense of gaining unauthorized access to the six bank accounts he looted. And like Robin Hood, he committed theft because, while he did not keep the money he took, he did take the funds from their lawful owners against their wishes (and without their knowledge).
The anonymous casino Robin Hood, on the other hand, did not personally "take" any money from the casino. He merely diverted casino funds to people playing poker and slots. Our ability to charge him with theft -- at least in any traditional sense -- is further complicated by the fact that we presumably do not know, cannot know, how many of those players would actually have won had Robin Hood not intervened.
We could always charge the anonymous casino Robin Hood with gaining unauthorized access to the casino's server . . . but that seems somehow inadequate, given what he accomplished with that access.
Wednesday, February 01, 2006
Terminology
This is a blog about “cybercrime.”
It seems appropriate, then, to begin by defining what we’ll be talking about – by trying to define what “cybercrime” is.
I checked an online dictionary and found that it defines “cybercrime” as “a crime committed on a computer network”. I think that’s a good definition, as far as it goes.
The problem I have with this definition is that, as an American lawyer, I have to be able to fit the concept of “cybercrime” into the specific legal framework we use in the United States . . . and into the more general legal framework that ties together legal systems around the world.
And that leads me to ask several questions: What, precisely, is “cybercrime?” Is “cybercrime” different from plain old “crime?” If so, how? If not, if “cybercrime” is really just a boutique version of “crime,” then why do we need a new term for it?
Let’s start by trying to parse out what “cybercrime” is and what it is not. The perfectly logical definition quoted above says “cybercrime” is “a crime” that is committed on a computer network. I’d revise that a bit . . . for a couple of reasons.
One is that this definition assumes that every “cybercrime” constitutes nothing more than the commission of a traditional “crime,” albeit by different means (by using a computer network). As I’ve argued elsewhere, that is true for much of the cybercrime we have seen so far. For example, online fraud such as the 419 scam is nothing new, as far as law is concerned; it’s simply “old wine in new bottles,” old crime in a slightly new guise.
Until the twentieth century, people had only two ways of defrauding others: They could do it face to face by, say, offering to sell someone the Brooklyn Bridge for a very good price; or they could do the same thing by using snail mail. The proliferation of telephones in the twentieth century made it possible for scam artists to use the telephone to sell the Bridge, again at a very good price. And now we see twenty-first century versions of the same thing migrating online.
As I’ve explained elsewhere, the same thing is happening with other traditional crimes, such as theft, extortion, harassment, vandalism and trespassing. So far, it seems that a few traditional crimes -- like rape and bigamy -- probably will not migrate online because the commission of these particular crimes requires physical activity that cannot occur online, at least not unless and until we revise our definitions of these crimes.
The same cannot be said of homicide: While we have no documented instances in which computer technology was used to take human life, this is certainly conceivable, and will no doubt occur. Those who speculate on such things have postulated instances in which, say, someone hacks into the database of a hospital and kills people by altering the dosage of their medication. The killer would probably find this a particularly clever way to commit murder, since the crime might never be discovered. The deaths might be erroneously put down to negligence on the part of hospital staff; and even if they were discovered, it might be very difficult to determine which of the victims was the intended target of the unknown killer.
But I digress. My point is that while most of the cybercrime we have seen to date is simply the commission of traditional crimes by new means, this is not true of all cybercrime. As I explain elsewhere, we clearly have one completely new cybercrime: a distributed denial of service (DDoS) attack. A DDoS attack overloads computer servers and effectively shuts down a website. In February of 2000, someone launched DDoS attacks that effectively shut down Amazon.com and eBay, among other sites.
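To see why the "distributed" part matters technically, consider a toy sketch of my own (it does not model the 2000 attacks or any real defense): a server that throttles any single source sending too many requests. A flood from one machine trips the limit immediately, but the same volume of traffic spread across thousands of compromised machines stays under every per-source limit, which is what allows a DDoS attack to overwhelm a site that ordinary throttling would protect. The limits and numbers below are hypothetical.

```python
# A toy illustration (not a model of any real attack or defense): per-source
# throttling contains a single flooding host but not the same load spread
# across many hosts -- the essence of a *distributed* denial of service.

PER_SOURCE_LIMIT = 100   # hypothetical: max requests allowed per source per window
SERVER_CAPACITY = 5_000  # hypothetical: total requests the server can handle per window


def simulate(requests_per_source: dict[str, int]) -> None:
    """Apply per-source throttling, then report whether the server is overloaded."""
    served = 0
    blocked = 0
    for source, count in requests_per_source.items():
        allowed = min(count, PER_SOURCE_LIMIT)  # throttle each source individually
        served += allowed
        blocked += count - allowed
    overloaded = served > SERVER_CAPACITY
    print(f"served={served} blocked={blocked} overloaded={overloaded}")


if __name__ == "__main__":
    # One noisy source: throttling contains it easily (100 served, not overloaded).
    simulate({"single-host": 50_000})

    # The same total load from 1,000 sources, each under the per-source limit:
    # nothing is blocked, and the server's capacity is exceeded.
    simulate({f"bot-{i}": 50 for i in range(1_000)})
```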
DDoS attacks are increasingly used for extortion; someone launches an attack on a website, then stops the attack and explains to the owner of the website that attacks will continue unless and until the owner pays a sum for “protection” against such attacks. This simply represents the commission of an old crime (extortion) by new means. It is a tactic the Mafia was using over half a century ago, though they relied on arson instead of DDoS attacks.
But a “pure” DDoS attack such as the 2000 attacks on Amazon.com and eBay is not a traditional crime. It’s not theft, or fraud, or extortion or vandalism or burglary or any crime that was within a pre-twentieth century prosecutor’s repertoire. It is an example of a new type of crime, a “pure” cybercrime. As such, it requires that we create new law, which makes it a crime to launch such an attack. Otherwise, there is no crime, which is currently the situation in Britain; the UK’s 1990 Computer Misuse Act outlawed hacking and other online variants of traditional crime, but did not address DDoS attacks.
So, one reason I find the definition above unsatisfactory is that it does not encompass the proposition that cybercrime can consist of committing “new” crimes – crimes we have not seen before, and that we may not have outlawed yet – as well as “old” crimes.
The other reason I take issue with the definition I quoted above is that it links the commission of cybercrime with the use of a “computer network.” This is usually true; in fact, the use of computer networks is probably the default model of cybercrime. But it is also possible that computer technology, but not network technology, can be used for illegal purposes. A non-networked computer can, for example, be used to counterfeit currency or to forge documents. In either instance, a computer, but not a computer network, is being used to commit an “old” crime.
This post has gone on too long, I suspect, so I shall stop . . . for now.