I live in Dayton, Ohio, and because of that I read the local paper: the Dayton Daily News.
A day or so ago, the DDN had a story on identity theft, a version of which is available online. The hard-copy version was longer than the online one and included a photograph of a local prosecutor. The prosecutor, who will remain nameless here, has headed the fraud unit of the Montgomery County Prosecutor's Office for ten years (or so the DDN says), and I'm sure he is very experienced, very knowledgeable about white-collar crime. Indeed, I believe I've heard anecdotal evidence to that effect, though I don't know the man myself.
Most of the hard-copy version of the article seemed to be an Associated Press-style story about identity theft, one with lots and lots of statistics . . . the gist of which is that we really don't have to worry much about identity theft because it's happening in the real world more than it's happening online. The premise seemed to be that, since it's happening primarily in the real world, it's nothing new.
Now, I had some problems with that part of the article because it seemed to be saying that online identity theft is not a problem since only a small percentage of identity theft can be attributed to commercial data breaches, that is, to companies' allowing consumer data to be compromised. I have some reservations about that contention, and I also have some problems with equating online identity theft to compromised commercial databases. The article did not deal much with phishing or the other "personalized" types of identity theft, which I think was a shame. IMHO, it is important to educate people as much as possible about the various kinds of threats that exist online.
But all of that is minor carping on my part -- none of it is what prompted me to write this post. What prompted me to write this post was the role the local fraud prosecutor played in the (much longer hard-copy) version of the article. What was his role, you ask?
You might assume that, as a clearly experienced and no doubt very talented fraud prosecutor, his role would be that of a prosecutor who is handling/has handled identity theft cases. Nope -- his role in the article was as victim. The article explains how he had his identity stolen.
Now, I find that peculiar, especially given the article's overall tone (don't worry about identity theft, it's nothing new, nothing strange). First of all, I suspect most prosecutors would not fall for real-world fraud schemes -- their experience and expertise would protect them. Here, though, we have not just a run-of-the-mill prosecutor but the head of the local county prosecutor's fraud unit becoming a victim of identity theft. Doesn't that undercut the whole "don't worry, identity theft is really not anything to be concerned about" tenor of the article?
Second, I found it interesting, given that tenor, that the article did not say anything about how the local prosecutor's office is handling, would handle or hypothetically might handle identity theft cases (whichever applies . . . though my pick would be the last alternative). The article doesn't say anything about that (presumably because I'm right and the last alternative, the hypothetical, is the correct choice) or even tell us if the identity thief who victimized the career prosecutor was ever prosecuted. Continuing its reassuring tone, it tells us that the identity thief who picked on this prosecutor did so by running up charges on his credit card. The prosecutor noticed the charges, called the credit card company and had the charges removed so, as the article notes, the identity theft "didn't cost him a cent."
Well, that's reassuring, isn't it?
I'm sorry -- I'm venting a bit here and, in so doing, I really, really do not mean to be harsh with the DDN reporter who wrote the article or in any way be disrespectful to the prosecutor. It's just that they hit a nerve, as far as I am concerned.
See, I have spent a lot of time working with state and local law enforcement officers and with local prosecutors. From that experience, I know that cybercrime poses many, many challenges for prosecutors. I have often had police officers tell me they investigated a cybercrime case (one that was local enough that it could be prosecuted in their state/county), put a good case together and then took the results of their investigation to a local prosecutor . . . who didn't want to touch the case, because cybercrime cases are "different," complex and time-consuming.
They're "different" because, as I have written about before, the law used to charge the person may be new, may be non-existent (which requires some creative extrapolation on the part of the prosecutor) or may raise difficult constitutional or other issues. They're complex for that reason, too. They're also complex because they can involve difficult issues concerning the intersection of law and technology.
These issues can arise with regard to charges against the defendant. If, say, the defendant was an employee of a company and used her legitimate employee access to its computer system to, say, delete files she was not supposed to delete, copy files she was not supposed to copy or browse through files she was not supposed to see, then the prosecutor will have to figure out what to charge her with. He cannot charge her with gaining "unauthorized access" to (hacking) the system because her employment gave her the right to access part of the system or to access all of the system for certain uses. So the prosecutor will have to figure out if he can legitimately charge her with "exceeding authorized access" (whatever that means) to the system, knowing that her lawyer will no doubt claim that everything she did was authorized. Trying to figure that out takes time (one reason why these cases are also time-consuming) and can be very difficult, especially if one has little or no understanding of computer systems.
If the prosecutor gets over that hurdle, there can be constitutional issues -- Fourth Amendment challenges to how the evidence was gathered -- and digital evidence issues -- challenges to the accuracy of the data being used as evidence against her. All of those issues can raise difficult questions about how law intersects with evolving technology.
So cybercrime cases can be a burden for prosecutors (just as they can be a burden for police officers) . . . which means prosecutors often may not want to deal with them. That is unfortunate. I can understand why prosecutors shy away from these cases, given everything I've just outlined plus the heavy caseloads they already have (involving real-world crimes, which are sometimes regarded as "realer" than cybercrimes).
The problem is that cybercrime cases are not going to go away; they're only going to increase in number, in the extent of the damage they do and in complexity. Not prosecuting these cases is only going to encourage more people to commit cybercrimes.
I remember a conversation I had a year ago with an economic crime detective in a major US city. He told me he keeps "catching" the same cybercrime perpetrators, putting together cases, taking the cases to prosecutors who decline to prosecute. He keeps trying, but is obviously becoming discouraged.
My point here is not to pick on the DDN or on the Dayton prosecutor whose identity was stolen. My point here, insofar as I really have one, is to point out that here, as well as in other areas, we are not doing a good job of dealing with cybercrime. I am not sure what the answer is, since it will take a lot of resources (money, personnel, expertise) to deal with this problem, and counties and parishes and states all do have other priorities. Real-world crimes -- blowing people up, killing them by other means, harming them by other means -- are an obvious and compelling priority.
I guess I just do not understand why we cannot do both.
Monday, June 26, 2006
Tuesday, June 20, 2006
"Toxic immersion"
I just finished Synthetic Worlds: The Business and Culture of Online Games, a book by Edward Castronova (University of Chicago Press, 2005). It raises a number of interesting issues, some of which I may address in future posts.
Today, I want to talk a bit about an issue Castronova raises toward the end of the book: “toxic immersion.” (Synthetic Worlds, page 238). He describes toxic immersion as “losing people to a space that, by any standard of human worth, dignity, and well-being, is not good for them.” Castronova unfortunately does not provide many details on what, precisely, he means by this. He does note that it would consist, at least in part, of having “synthetic worlds” (i.e., virtual realities) “become permanent homes for the conscious self.”
It is already apparent this could happen in various ways. As Castronova points out, the most extreme option is a Matrix scenario in which our bodies are maintained by machines while our minds roam virtual worlds. (Synthetic Worlds, page 238). Another possibility – raised by a British Telecom forecast – is that human consciousness would leave its physical host and migrate into cyberspace, or a future version of cyberspace. (2005 BT Technology Timeline). Or there might be a less drastic scenario, one in which we spend much of our time plugged into cyberspace (or the future version . . . ) and the rest interacting with the real, physical world. Or . . . many others.
But I’m really not interested in "how toxic immersion occurs" scenarios. What I found interesting about Castronova’s take on toxic immersion is that he suggests it could justify state intervention to protect people from an experience "that, by any standard . . . is not good for them." (Synthetic Worlds, page 238).
I find this suggestion interesting because it reminds me of something I wondered about a few years ago, and then forgot about, in the press of dealing with other issues, other problems.
It occurred to me, a few years ago, that there could be some very interesting parallels between the way societies might deal with immersion in virtual realities and the way societies currently deal with drugs. Drugs (at least certain drugs) and virtual realities have something in common: They can both take us away from the real, physical world. Drugs do this in various ways: by blurring the edges of the real world, by blunting our ability to experience the real world or even, in the case of hallucinogens, transforming our experience of the real world. Virtual realities go even further; they can take us away -- conceptually, anyway -- from the real, physical world.
Historically, many cultures have had no difficulty whatsoever with the real-world-evading and/or -transforming qualities of various drugs. Indeed, some embraced the real-world-transforming qualities of drugs, incorporating drugs into their religious ceremonies. Other cultures, however, have historically rejected the real-world-evading and/or -transforming qualities of various drugs. As we know, this latter view has triumphed over the last century or so, and we live in a world in which access to drugs is carefully controlled and unauthorized access is punished as a crime.
It occurred to me, several years ago, that virtual realities can raise many of the same issues as real-world-evading and/or real-world-transforming drugs. Castronova's comments reminded me of my reflections on that issue because he clearly believes a "descent" into virtual reality would justify, as he says, "paternalistic" intervention by the state. Why, I wonder? Why, (I hope) you ask?
When I thought about this several years ago, I speculated that we might see a world in which the use of virtual reality was treated in a fashion analogous to the way we treat the use of (certain) drugs. That is, access to virtual reality would be . . . what? . . . controlled? licensed? monitored? penalized? . . . all for "our own good," as Castronova would have it.
What would justify this? If we reject, as I do, Puritanically-based knee-jerk reactions to any vaguely-hedonistic experience, what remains? The historic arguments for criminalizing drugs (as aggressively articulated by Harry Anslinger, the first U.S. "drug czar," in the 1930s) were that (certain) drugs (i) caused people to become violent, (ii) damaged users' physical health and/or (iii) resulted in their becoming parasites on society because they used drugs instead of working to support themselves and their families. (Alcohol somehow escaped being consigned to the outlawed "drug" category even though many/all of these "justifications" could be applied to it, as well.)
I can see similar arguments' being made with regard to the use of virtual reality, which currently consists primarily of multiple-user online games. When I first thought of that possibility, I was thinking primarily in terms of justifications (ii) and (iii) because I could see people's becoming so immersed in virtual reality that they tended to let other things slide. We are already beginning to see some of this, along with a societal reaction against it. As you may know, there have been a few instances in which people have died apparently as a result of playing online games without taking breaks (for food and sleep?).
These deaths, along with other not-really-identified evils resulting from intensive online gaming, have given rise to concerns about "online game addiction" and produced at least one effort to enact legislation that would limit the amount of time people could spend playing online games. I could be wrong but this looks to me like a first step, maybe a small first step but still a first step, toward what I was speculating about several years ago: treating the use of virtual reality as analogous to drugs, regulating the usage in various ways, maybe even eventually prohibiting usage of virtual reality by all/some segments of the population.
And what about factor (i), Harry Anslinger's favorite: the premise that the use of virtual reality (like drugs) makes people violent? Well, I have noted, over the last year or three, articles appearing that link online game playing to increased violence and aggression in the real world. Although research to the contrary has also appeared, it looks to me like the online-games-cause-violence theorists are getting more play in the media. And perception is what counts. Harry Anslinger, for example, got marijuana outlawed by claiming that it caused people to become violent, very violent . . . incredible as that may seem today.
So where am I going with all this? I'm not really sure. I'm not saying that virtual reality/online gaming is analogous to (certain) drugs that (presumably) have undesirable effects which are sufficient to warrant their being controlled or outlawed. I'm not saying that at all. What I am suggesting is that there are perceived functional parallels between the two that may well result in virtual reality's being treated in a fashion analogous to the way we treat (certain) drugs.
So, who knows . . . maybe in ten or twenty or thirty or fifty years we will have a "Virtual Reality Control Strategy" and a "War on Virtual Reality."
What an absurd and depressing thought.
Saturday, June 17, 2006
Trojan horse defense
A Trojan horse program is a type of malware, or malicious software. Like other malware, it installs itself surreptitiously on a computer; unlike other types of malware, a Trojan horse lets the person who disseminated it remotely control the computer(s) on which it has installed itself. The person who controls the Trojan will have complete access to the data on the compromised computer and can copy it, delete it or put new data on the computer.
The last feature is what I want to talk about today. It's given rise to what is called the "Trojan horse defense." A friend and I wrote a law review article analyzing how prosecutors can rebut the defense. (Susan Brenner, Brian Carrier & Jef Henninger, The Trojan Horse Defense in Cybercrime Cases, 21 Santa Clara Computer and High Technology Law Journal 1 (2004)). The article focuses both on legal arguments and technical issues a prosecutor facing the defense can use to rebut it. It goes into a great deal of detail -- today, I want to talk generally about the Trojan horse defense (THD) and some of the issues it raises.
The THD became notorious in 2003, when Aaron Caffrey used it in the United Kingdom. Caffrey was charged, basically, with hacking into the Port of Houston computers and causing them to shut down. His defense attorney conceded the attack came from Caffrey's laptop computer, but claimed Caffrey was not responsible for the attack -- that he had, in effect, been "framed" by other hackers who installed Trojan horse programs on his laptop and used them to attack the Port of Houston computers. In an effort to rebut this defense, the prosecution pointed out that no trace of Trojan horse programs had been found on the laptop; the defense countered by explaining that the Trojan horse programs had been "self-erasing" Trojans, so no trace would remain. The jury clearly bought the defense's argument, as it acquitted Caffrey.
This was not the first instance in which the THD had been used in the UK, but the Caffrey case received far more publicity than the earlier instance(s) in which the defense was raised. News stories pointed out that Caffrey's defense raised serious challenges for prosecutors. As one observer noted, the "case suggests that even if no evidence of a computer break-in is unearthed on a suspect's PC, they might still be able to successfully claim that they were not responsible for whatever their computer does, or what is found on its hard drive." And others pointed out that someone could establish the factual basis for such a defense by having Trojan horse programs on their computer.
As we note in the article, the THD is a new version of a very old defense: the SODDI defense (as it is known in the U.S.). SODDI stands for "some other dude did it." When a defendant raises a SODDI defense, he (or she) concedes that a crime was committed but blames someone else for its commission. The SODDI defense is usually not very successful in real-world prosecutions (the O.J. Simpson case is a major exception). When a defendant raises a SODDI defense in a prosecution for a traditional, real-world crime -- like, say, murder or rape -- he claims the crime was committed by an unknown someone else. Jurors tend to be skeptical of claims like this, especially if, as is usually the case, the prosecution is able to link the defendant to the crime by showing motive, opportunity and/or incriminating evidence that is in his possession or can be traced to him (DNA, fingerprints, etc.). Jurors are skeptical of claims like this because they understand how the real world works.
The SODDI defense has been much more successful in cybercrime cases because they involve a context most jurors don't really understand, or understand just well enough to buy defense claims like Caffrey's contention about being framed by self-erasing Trojan horse programs.
(I'm not a technically trained person, so I cannot opine on the likelihood of self-erasing Trojans. I know people who are technically trained who do not believe they exist. If they do not exist now, I assume they will at some point, so I don't see this as a particularly important issue, at least not for the prosecution.)
In cybercrime cases, the SODDI defense turns the tables on the prosecution. In a criminal case, the prosecution has the burden of proving all the elements of the crime beyond a reasonable doubt, while the defense has the burden of proving an affirmative defense only by a preponderance of the evidence. A SODDI defense, though, is not an affirmative defense; it simply attacks the prosecution's proof, which means the prosecution must still carry its burden beyond a reasonable doubt once the defense is raised.
If a Trojan horse program is found on a defendant's computer, that would provide the factual basis for getting the defense to the jury . . . that, along with testimony which establishes what a Trojan horse program is and what it does. Once the defense does this, the ball is in the prosecution's court: The prosecution must rebut the defense, which means it must prove beyond a reasonable doubt that it was the defendant -- not Some Other Dude Using a Trojan Horse -- who committed the crime(s) charged. This is where the difficulty arises.
The prosecution now is obligated to prove a negative: that it was not Some Other Dude Using a Trojan Horse program who hacked the Port of Houston, collected child pornography or committed some other cybercrime. Proving a negative can be difficult, especially in this context.
Unlike instances in which a defendant raises a SODDI defense in a real-world criminal case, the prosecution cannot rely on the jury's ability to use its common sense to assess the merits of, and then reject, the defense as implausible, because the defense is grounded in what is still, for many, a distinctly "uncommon" context: the virtual environment of computers, hard drives and cyberspace. Some jurors may know nothing about technology, which really gives them no conceptual framework to use in judging the merits of a THD. This, I think, makes them something of a wild card; their decision to go with the prosecution or the defense may be made arbitrarily, a juror's equivalent of flipping a coin.
Other jurors may know a little about technology, enough to know what viruses are and to have a general idea of what they can do. As far as the prosecution is concerned, a little knowledge may be a dangerous thing: These jurors may understand enough about technology to be willing to believe that Trojan horses (and other types of malware) can do things they may not be able to do at all, or may not have been able to do given the facts in the case before them.
(I'm not sure where I come out on jurors who know a lot about technology. They might be able to analyze and reject the factual foundation of a shaky/untenable THD or they might over-analyze the evidence presented and so buy into the defense. I guess one reason I am not sure where I come out on these jurors is that I think they are likely to be very scarce in the jury pool.)
Assuming, as I think is reasonable, that the jury is made up of people with little or no knowledge of technology, how does the prosecution rebut the defense's presentation of a THD? It seems that the prosecution will have to dissect the technical basis of the defense to do so; the Caffrey prosecution showed that no Trojan horses were on Caffrey's laptop, and asked the jury to infer from this that it was Caffrey, not a Trojan horse program being used by someone else, who shut down the computers at the Port of Houston.
But if Trojan horses are found on the suspect's computer, the prosecution will have to get into the specifics of technology -- its capabilities and limitations -- to rebut the THD. This, I think, creates real difficulties for prosecutors, because it requires that they be able to explain abstruse technical concepts and processes to a lay jury in a way laypeople can understand and then use to assess the THD critically. That can be a very difficult process; it will require, I think, not only expert witnesses but also the skillful use of graphics -- animations, diagrams, maybe physical exhibits -- that can really let jurors grasp what would have had to occur for the THD to be valid and why it did not occur (establishing, by inference, that the THD is invalid). Doing all that can be a huge undertaking for the average prosecutor/prosecutor's office, as it requires time, expertise and the money to pay for the creation of the necessary demonstrative evidence (animations, diagrams, etc.).
For now, I suspect the defense enjoys the advantage with regard to the THD, which is why I am surprised that we have not seen it used more in this country (it still seems to be used, often successfully, in the United Kingdom).
The only American case I know of in which it has been used successfully is an Alabama state tax fraud/tax evasion prosecution against Eugene Pitts, a Hoover, Alabama accountant. Pitts was accused of underreporting income on his tax returns for 1997, 1998 and 1999. He admitted there were errors on his returns for those years, but blamed the errors on a computer virus. Although prosecutors pointed out that the alleged virus did not affect the client tax returns Pitts prepared on the same computer, the jury acquitted him of all charges after deliberating for 3 hours . . . another "Caffrey verdict."
I assume the infrequency with which a THD is used in this country has something to do with the defense bar's familiarity, or unfamiliarity, with technology. Other than that, I cannot imagine why it does not show up more often, especially given the frequency with which the real-world variant of the SODDI defense is used.
Everything I have said in this post has been directed at the prosecution's burden and ability to rebut a THD. Everything I have said so far implicitly assumes that the invocation of the defense is frivolous, as it was, IMHO, in the Caffrey and Pitts cases. And I think that is likely to be true in many (most?) of the cases in which a THD is used.
It will not, however, be true in every case. As people knowledgeable about computer technology will tell you, a Trojan horse program could easily be used to frame someone for a crime. While it seems exceedingly unlikely ("incredible") that a Trojan horse program could put 15,000 images of child pornography sorted into folders and sub-folders on someone's hard drive without their knowing it, a Trojan horse could be used to frame someone for, fraud, embezzlement or other crimes, even murder.
Think about it: Do you know everything that is on your hard drive . . . every file folder, every file? I can't imagine that you do, given the amount of data most of us acquire. And how many of us ever check to see what, exactly, is on our hard drive? Maybe other people do; I don't (I hope I am not inviting someone to frame me by admitting that . . . ).
The possibility makes me think of the old TV series, The Fugitive. In the TV series (and in the movie), Dr. Richard Kimble is adventitiously framed by the one-armed man who kills Kimble's wife. Kimble's SODDI defense (asserting that the mysterious one-armed man, whom only he saw, killed his wife) fails, and he is convicted of the crime. The same thing could be done, more calculatedly and with far less risk to the framer, by using a Trojan horse program.
Imagine a twenty-first century version of The Fugitive: Kimble's wife becomes ill so he takes her to the hospital, where she dies; the autopsy shows she died of ricin poisoning. As in the series, Kimble and his wife had been fighting; the evidence of marital discord encourages the police to take him seriously as a suspect in her death. Police obtain a search warrant, seize the computer in their home and search it. On its hard drive, they find evidence (downloaded data, evidence of Internet searches) that Kimble researched the toxicity of ricin poisoning and the processes used to extract ricin from castor beans. (They might also find ricin in the house somewhere, maybe in a place Kimble uses.) This would be enough to charge him with his wife's death (absent other contravening facts) and probably enough to convict him (absent a compelling defense).
In this scenario, Kimble could try asserting a THD to disclaim responsibility for the research into ricin poisoning, but the THD would not be as effective here as it could be in a "pure" cybercrime case. Here, a Trojan horse program is being used, in part, to frame someone for a real-world crime, murder. The potential for persuading the jury (correctly, in this instance) that someone used a Trojan horse program to put the ricin data on the computer as part of a larger plot to frame Kimble for his wife's death would be undermined by that fact because the jurors would be likely to concentrate on the real-world aspects of the crime (death, fighting, ricin, opportunity, etc.) and use their common sense (no one said it's infallible) to conclude that he did it.
I could go on, but I hope I've made my point. The Trojan horse defense is a two-edged sword: It can be used by guilty parties seeking to avoid being held liable for what they have done; but it can also be used to frame the innocent.
The last feature is what I want to talk about today. It's given rise to what is called the "Trojan horse defense." A friend and I wrote a law review article analyzing how prosecutors can rebut the defense. (Susan Brenner, Brian Carrier & Jef Henninger, The Trojan Horse Defense in Cybercrime Cases, 21 Santa Clara Computer and High Technology Law Journal 1 (2004)). The article focuses on both the legal arguments and the technical issues a prosecutor facing the defense can use to rebut it. It goes into a great deal of detail -- today, I want to talk generally about the Trojan horse defense (THD) and some of the issues it raises.
The THD became notorious in 2003, when Aaron Caffrey used it in the United Kingdom. Caffrey was charged, basically, with hacking into the Port of Houston computers and causing them to shut down. His defense attorney conceded the attack came from Caffrey's laptop computer, but claimed Caffrey was not responsible for the attack -- that he had, in effect, been "framed" by other hackers who installed Trojan horse programs on his laptop and used them to attack the Port of Houston computers. In an effort to rebut this defense, the prosecution pointed out that no trace of Trojan horse programs had been found on the laptop; the defense countered by explaining that the Trojan horse programs had been "self-erasing" Trojans, so no trace would remain. The jury clearly bought the defense's argument, as it acquitted Caffrey.
This was not the first instance in which the THD had been used in the UK, but the Caffrey case received far more publicity than the earlier instance(s) in which the defense was raised. News stories pointed out that Caffrey's defense raised serious challenges for prosecutors. As one observer noted, the "case suggests that even if no evidence of a computer break-in is unearthed on a suspect's PC, they might still be able to successfully claim that they were not responsible for whatever their computer does, or what is found on its hard drive." And others pointed out that someone could establish the factual basis for such a defense by having Trojan horse programs on their computer.
As we note in the article, the THD is a new version of a very old defense: the SODDI defense (as it is known in the U.S.). SODDI stands for "some other dude did it." When a defendant raises a SODDI defense, he (or she) concedes that a crime was committed but blames someone else for its commission. The SODDI defense is usually not very successful in real-world prosecutions (the O.J. Simpson case is a major exception). When a defendant raises a SODDI defense in a prosecution for a traditional, real-world crime -- like, say, murder or rape -- he claims the crime was committed by an unknown someone else. Jurors tend to be skeptical of claims like this because they understand how the real world works -- especially if, as is usually the case, the prosecution is able to link the defendant to the crime by showing motive, opportunity and/or incriminating evidence that is in his possession or can be traced to him (DNA, fingerprints, etc.).
The SODDI defense has been much more successful in cybercrime cases because they involve a context most jurors don't really understand -- or understand just well enough to buy defense claims like Caffrey's contention about being framed by self-erasing Trojan horse programs.
(I'm not a technically trained person, so I cannot opine on the likelihood of self-erasing Trojans. I know people who are technically trained who do not believe they exist. If they do not exist now, I assume they will at some point, so I don't see this as a particularly important issue, at least not for the prosecution.)
In cybercrime cases, the SODDI defense turns the tables on the prosecution: In a criminal case, the prosecution has the burden of proving all the elements of the crime beyond a reasonable doubt and the defense has the burden of proving an affirmative defense by a preponderance of the evidence.
- The preponderance standard is much lower than the standard the prosecution must meet, but it ensures that the defense cannot present some purely frivolous theory to the jury.
- Affirmative defenses concede that a crime has been committed but assert there is some reason why the defendant should not be held liable for it, such as that the defendant is insane or that he acted in self-defense.
If a Trojan horse program is found on a defendant's computer, that would provide the factual basis for getting the defense to the jury . . . that, along with testimony which establishes what a Trojan horse program is and what it does. Once the defense does this, the ball is in the prosecution's court: The prosecution must rebut the defense, which means it must prove beyond a reasonable doubt that it was the defendant -- not Some Other Dude Using a Trojan Horse -- who committed the crime(s) charged. This is where the difficulty arises.
The prosecution now is obligated to prove a negative: that it was not Some Other Dude Using a Trojan Horse program who hacked the Port of Houston, collected child pornography or committed some other cybercrime. Proving a negative can be difficult, especially in this context.
Unlike instances in which a defendant raises a SODDI defense in a real-world criminal case, the prosecution cannot rely on the jury's ability to use its common sense to assess the merits of, and then reject, the defense as implausible, because the defense is grounded in what is still, for many, a distinctly "uncommon" context: the virtual environment of computers, hard drives and cyberspace. Some jurors may know nothing about technology, which gives them no conceptual framework to use in judging the merits of a THD. This, I think, makes them something of a wild card; their decision to go with the prosecution or the defense may be made arbitrarily, a juror's equivalent of flipping a coin.
Other jurors may know a little about technology, enough to know what viruses are and to have a general idea of what they can do. As far as the prosecution is concerned, a little knowledge may be a dangerous thing: These jurors may understand enough about technology to be willing to believe that Trojan horses (and other types of malware) can do things they may not be able to do at all, or may not have been able to do given the facts in the case before them.
(I'm not sure where I come out on jurors who know a lot about technology. They might be able to analyze and reject the factual foundation of a shaky/untenable THD or they might over-analyze the evidence presented and so buy into the defense. I guess one reason I am not sure where I come out on these jurors is that I think they are likely to be very scarce in the jury pool.)
Assuming, as I think is reasonable, that the jury is made up of people with little or no knowledge of technology, how does the prosecution rebut the defense's presentation of a THD? It seems that the prosecution will have to dissect the technical basis of the defense to do so; the Caffrey prosecution showed that no Trojan horses were on Caffrey's laptop, and asked the jury to infer from this that it was Caffrey, not a Trojan horse program being used by someone else, who shut down the computers at the Port of Houston.
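To make the forensic step concrete, here is a minimal sketch (in Python, with hypothetical file names and an invented hash database) of the kind of check an examiner might run: hashing every file on a seized drive and comparing the hashes against a set of known-malware signatures. A negative result is the sort of evidence the Caffrey prosecution relied on; a positive result is what gives the defense its factual basis.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 64 KB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_for_known_malware(root: Path, known_hashes: set[str]) -> list[Path]:
    """Walk every file under `root` and return those whose hashes match
    a set of known-malware hashes (in practice supplied by an examiner's
    hash database). An empty result supports a 'no Trojan was found'
    argument; a hit supplies the factual basis for a THD."""
    hits = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            try:
                if sha256_of(path) in known_hashes:
                    hits.append(path)
            except OSError:
                continue  # unreadable file; a real tool would log this
    return hits
```

Real forensic suites do far more (signature scanning, registry analysis, timeline reconstruction), but hash matching against a malware database is a recognizable first pass.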
But if Trojan horses are found on the suspect's computer, the prosecution will have to get into the specifics of technology -- its capabilities and limitations -- to rebut the THD. This, I think, creates real difficulties for prosecutors, because it requires that they be able to explain abstruse technical concepts and processes to a lay jury in a way laypeople can understand and can use that understanding to conduct a critical assessment of the THD presented to them. That can be a very difficult process; it will require, I think, not only expert witnesses, but the skillful use of graphics -- animations, diagrams, maybe physical exhibits -- that can really let jurors grasp what would have had to occur for the THD to be valid and why that did not occur (establishing, by inference, that the THD is invalid). Doing all that can be a huge undertaking for the average prosecutor/prosecutor's office, as it requires time, expertise and the money to pay for the creation of the necessary demonstrative evidence (animations, diagrams, etc.).
For now, I suspect the defense enjoys the advantage with regard to the THD, which is why I am surprised that we have not seen it used more in this country (it still seems to be used, often successfully, in the United Kingdom).
The only American case I know of in which it has been used successfully is an Alabama state tax fraud/tax evasion prosecution against Eugene Pitts, a Hoover, Alabama accountant. Pitts was accused of underreporting income on his tax returns for 1997, 1998 and 1999. He admitted there were errors on his returns for those years, but blamed the errors on a computer virus. Although prosecutors pointed out that the alleged virus did not affect the client tax returns Pitts prepared on the same computer, the jury acquitted him of all charges after deliberating for 3 hours . . . another "Caffrey verdict."
I assume the infrequency with which a THD is used in this country has something to do with the defense bar's familiarity, or unfamiliarity, with technology. Other than that, I cannot imagine why it does not show up more often, especially given the frequency with which the real-world variant of the SODDI defense is used.
Everything I have said in this post has been directed at the prosecution's burden and ability to rebut a THD. It implicitly assumes that the invocation of the defense is frivolous, as it was, IMHO, in the Caffrey and Pitts cases. And I think that is likely to be true in many (most?) of the cases in which a THD is used.
It will not, however, be true in every case. As people knowledgeable about computer technology will tell you, a Trojan horse program could easily be used to frame someone for a crime. While it seems exceedingly unlikely ("incredible") that a Trojan horse program could put 15,000 images of child pornography, sorted into folders and sub-folders, on someone's hard drive without their knowing it, a Trojan horse could be used to frame someone for fraud, embezzlement or other crimes, even murder.
Think about it: Do you know everything that is on your hard drive . . . every file folder, every file? I can't imagine that you do, given the amount of data most of us acquire. And how many of us ever check to see what, exactly, is on our hard drive? Maybe other people do; I don't (I hope I am not inviting someone to frame me by admitting that . . . ).
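The point about not knowing what is on your own drive is easy to demonstrate. Here is a small sketch (the function name is mine, not from any tool) that counts the files under a directory and totals their size; run it on a home directory and the numbers make clear why nobody audits their drive by hand.

```python
from pathlib import Path

def summarize_files(root: Path) -> tuple[int, int]:
    """Count every file under `root` and total their sizes in bytes --
    a rough way to confront the question of whether you actually know
    what is on your hard drive."""
    count = total_bytes = 0
    for path in root.rglob("*"):
        try:
            if path.is_file():
                count += 1
                total_bytes += path.stat().st_size
        except OSError:
            continue  # permission errors, dangling links, etc.
    return count, total_bytes

# Example (may take a while on a full home directory):
# files, size = summarize_files(Path.home())
# print(f"{files} files, {size / 1e9:.1f} GB")
```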
The possibility makes me think of the old TV series, The Fugitive. In the TV series (and in the movie), Dr. Richard Kimble is adventitiously framed by the one-armed man who kills Kimble's wife. Kimble's SODDI defense (asserting that the mysterious one-armed man, whom only he saw, killed his wife) fails, and he is convicted of the crime. The same thing could be done, more calculatedly and with far less risk to the framer, by using a Trojan horse program.
Imagine a twenty-first century version of The Fugitive: Kimble's wife becomes ill so he takes her to the hospital, where she dies; the autopsy shows she died of ricin poisoning. As in the series, Kimble and his wife had been fighting; the evidence of marital discord encourages the police to take him seriously as a suspect in her death. Police obtain a search warrant, seize the computer in their home and search it. On its hard drive, they find evidence (downloaded data, evidence of Internet searches) that Kimble researched the toxicity of ricin poisoning and the processes used to extract ricin from castor beans. (They might also find ricin in the house somewhere, maybe in a place Kimble uses.) This would be enough to charge him with his wife's death (absent other contravening facts) and probably enough to convict him (absent a compelling defense).
In this scenario, Kimble could try asserting a THD to disclaim responsibility for the research into ricin poisoning, but the THD would not be as effective here as it could be in a "pure" cybercrime case. Here, a Trojan horse program is being used, in part, to frame someone for a real-world crime: murder. That fact would undermine the potential for persuading the jury (correctly, in this instance) that someone used a Trojan horse program to put the ricin data on the computer as part of a larger plot to frame Kimble for his wife's death, because the jurors would be likely to concentrate on the real-world aspects of the crime (death, fighting, ricin, opportunity, etc.) and use their common sense (no one said it's infallible) to conclude that he did it.
I could go on, but I hope I've made my point. The Trojan horse defense is a two-edged sword: It can be used by guilty parties seeking to avoid being held liable for what they have done; but it can also be used to frame the innocent.
Tuesday, June 13, 2006
E-hijacking
According to a story posted on FleetOwner late last year, a shipment of computer tapes containing banking records belonging to Citigroup was "e-hijacked" as it was in transit to an Experian credit bureau in Texas.
The tapes were being shipped via UPS but, the story says, were diverted in transit. It says the shipment's electronic manifest was altered while the shipment was in transit, so that the tapes were delivered to an address other than the address for which they were destined. The story also says that the manifest was restored to its original form after the tapes were delivered, to make it appear that standard procedures had been followed. It does not explain how the alteration and mis-delivery were discovered.
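The alter-deliver-restore sequence the story describes can be reduced to a toy sketch. Everything here is hypothetical -- the record structure, the identifiers, the addresses -- but it shows why the trick works: if the record is restored after delivery and the system keeps no tamper-evident audit trail, an after-the-fact review of the manifest shows nothing unusual.

```python
from dataclasses import dataclass

@dataclass
class Manifest:
    shipment_id: str
    destination: str  # the address the carrier's routing system delivers to

# The legitimate manifest: tapes bound for the credit bureau.
manifest = Manifest(shipment_id="SHIP-001", destination="Experian facility, TX")

# 1. In transit, the attacker rewrites the destination ...
original_destination = manifest.destination
manifest.destination = "Rented warehouse, TX"

# 2. ... the carrier's systems dutifully deliver to the altered address ...
delivered_to = manifest.destination

# 3. ... and the record is restored afterward, so the stored manifest
#    again shows the legitimate destination.
manifest.destination = original_destination
```

Any real shipping system is vastly more complicated, but the vulnerability is the same: the routing data, not the truck, is what got hijacked.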
A couple of months later, UPS issued a denial, saying that the tapes were not e-hijacked. UPS maintained that the boxes containing the tapes broke open in transit and the contents were "inadvertently thrown away."
I tend to suspect that the original story was true and that the tapes were, in fact, e-hijacked, but I tend to be cynical about these things. And I could be wrong -- it's been known to happen.
If something like this did happen, it would raise some interesting legal issues, so let's assume, for the sake of discussion, that things went as the original story said -- that someone altered the UPS manifest and the tapes were mis-delivered to, say, a warehouse where people responsible for the alteration took delivery of them and then disappeared. It would be quite easy to rent or "borrow" a warehouse for this purpose, I suspect.
Some of the stories about this reported e-hijacking focused on how it was done, on whether it took an army of "outside" hackers to alter the manifest or whether it was done by an "insider" who might or might not have had some outside help. I tend to lean to the "insider" theory, if only because it would be much simpler, much cleaner than having to mount an external attack. If I were doing something like this (which, of course, I would not, but it is always interesting to play crook in one's mind), I would use an insider because I think that would minimize the chance of the alteration's being spotted. If you used outside hackers, their efforts to crack the system and their ultimate success in doing so might be noticed, might call attention to what was going on. I'd definitely go with the insider, myself . . . but that is not what I want to talk about.
Being a lawyer, I am fascinated by what, if any, "crime" our hypothetical e-hijackers would have committed.
The original story assumed that this e-hijacking constituted "theft," but I am not so sure. Legally, "theft" consists of taking someone's property without their consent; theft statutes often note that the thief takes the property with the intent to deprive the owner of its possession and use. We more commonly refer to theft as "stealing."
Here, our (hypothetical) e-hijackers did not take the property from anyone without consent. They (hypothetically) took the tapes from UPS, to which Citigroup had given them for purposes of shipment; that is what's called a bailment, and it basically means that UPS stands in Citigroup's shoes. So, if our hypothetical hijackers had pulled a "Sopranos"-style hijacking -- had men with guns stop the truck, order the driver out, order him to open the cargo area, hold him at bay with rifles and then go into the truck and take the boxes with the tapes over his protests -- we would have "theft."
That's not what happened. What happened (hypothetically) is that UPS consensually handed the tapes over to our (hypothetical) e-hijackers, not realizing that they were being mis-delivered. (The premise of the original story is that UPS thought it was delivering the tapes to Experian, which was their legitimate destination.) It's not stealing if you voluntarily hand over property to someone who, unbeknownst to you, is not authorized to receive it.
English common law had to deal with this problem many centuries ago: People being charged with theft made a similar argument when what they had done was to trick the victim into giving them property/money -- the twelfth-century version of selling the Brooklyn Bridge. Courts finally recognized that, in fact, this was not theft . . . but they still perceived it as being "wrong." So they created a new kind of crime: larceny-by-trick (or theft-by-trick), which has come down to us as "fraud." Like the thief, the fraudster gets property to which he/she is not legitimately entitled; unlike the thief, the fraudster does not take the property but, instead, convinces the victim to hand it over. So, it seems more likely that our (hypothetical) e-hijackers committed fraud.
Who, though, did they defraud? We said above that since Citigroup gave the tapes to UPS for the purpose of delivering them to Experian, UPS essentially stood in Citigroup's shoes while it had the tapes. That is, UPS effectively represented the "owner" of the property while it was in transit to Experian. Okay, that tells us who the "owner" of the property was at the moment it was (hypothetically) diverted from Experian to the (hypothetical) e-hijackers. But who, precisely, did they defraud? Who did they trick? The deception that, at least hypothetically, resulted in the transfer of possession of the tapes from UPS to the e-hijackers was not directed at a person; it was directed at a computer, more specifically, at the UPS computer which issued/processed the manifest for the shipment. If this story were true, and if, as seems unlikely, the e-hijackers were apprehended, would it be permissible to charge them with fraud based on their having deceived a computer?
Logically, I see no reason why we could not construct such a charge . . . but I suspect that if we did so, the defendants would move to dismiss, arguing that the law is and always has been that "fraud" consists of deceiving a person so that person hands property over to the fraudster. I'm not sure, at this point, that we actually need to revise our fraud laws to encompass this scenario, but I think it is an issue we might want to consider . . . because if e-hijacking really did not occur in this instance, it will.
Monday, June 05, 2006
C3: Cybercrime, cyberterrorism and cyberwarfare
I've written a lot about cybercrime and have done at least one post on cyberterrorism.
Today, I want to talk not about cybercrime or cyberterrorism as such, but about the three categories of online malefaction: cybercrime, cyberterrorism and cyberwarfare.
More specifically, I want to focus on the clear and not-so-clear distinctions between the categories.
Let's begin with some basic definitions:
In the real world, we know who deals with what: law enforcement deals with crime, agencies like the FBI and the Department of Homeland Security deal with terrorism, and the military deals with warfare.
Even here, though, the categorization does not always hold: Al Qaeda and other terrorist groups have been known to use online fraud (especially credit card fraud) as a way to raise money for their terrorist activity. If terrorists are engaging in what would otherwise be cybercrime, is the activity still cybercrime or does it become cyberterrorism? I'd say it's still cybercrime because while it is being perpetrated by those who style themselves as terrorists, it is, at bottom, still just fraud.
I want, though, to focus on the problem I noted above: the challenge of initially identifying what type of cyberactivity is at issue and ensuring that the proper agencies/personnel respond to it.
Imagine, say, that a series of sequenced attacks occur on financial systems scattered around the U.S. We will simplify the example by assuming that each of the attacks takes the same form. (It would, of course, be relatively easy to structure the attacks so they differ in varying degrees.)
So, keeping things simple, let us assume that all/many/most ATMs are taken offline (i) in Des Moines on April 1; (ii) in Portland on April 2; (iii) in Reno on April 3; (iv) in Cincinnati on April 5; (v) in Nashville on April 6; (vi) in Miami on April 7; and so on. The scenario might involve keeping the ATMs offline, or it might involve shutting them down, bringing them back up and then shutting them down again (which I think might be more effective). This basic pattern could be coupled with other attacks on banking systems . . . online banking might be shut down, data might be scrambled, etc.
Take that basic scenario: Who would respond (initially -- we'll get to escalating responses in a minute)? The local police would respond. It would presumably be regarded as a cybercrime -- maybe the stereotypical teenage hacker shutting down the system for fun, maybe a prelude to an extortion effort by professional hackers.
Assume, now, that the attack is not a cybercrime, that it is being perpetrated by those "hacker warriors" I mentioned earlier -- cyberwarriors trained and recruited by a nation-state, one that is hostile to the U.S. and that is using cyberspace in an effort to gain certain tactical advantages. Here, the tactical advantage might be an initial step toward destabilizing the financial system in the U.S.
How long would it take for us to realize we were under such an attack? How long would it take for us to realize that this was cyberwarfare, not cybercrime? How would that realization come to pass . . . if at all?
For that realization to occur, someone, somehow, would have to be able to see the big picture, would have to know that these attacks were occurring, would have to see the sequencing in the attacks, would have to know about the similarity in the attacks. How would that come to pass?
What if the local police in each of the cities in which an attack occurred simply believed it was a cybercrime? What if the local police, assisted, maybe, by the state police, sought to deal with it on their own? I think this is the most likely scenario, at least for a considerable period of time.
I hope, but doubt, that we have procedures, personnel, and data-gathering processes in place that allow us to track incidents such as these at a global level . . . that, in other words, let us (one or more of us, official one or more of us, somewhere) grasp what is occuring on a larger scale.
Otherwise, we could become the target of cyberwarfare and not even know it. In the 1970s there was, I think, a slogan -- something like "What if they gave a war and no one came?" Maybe the slogan for the 21st century should be something like "What if they started a war and we didn't know it until they won?"
Today, I want to talk not about cybercrime or cyberterrorism as such, but about the three categories of online malefaction: cybercrime, cyberterrorism and cyberwarfare.
More specifically, I want to focus on the clear and not-so-clear distinctions between the categories.
Let's begin with some basic definitions:
- Cybercrime is, essentially, using computer technology to commit unlawful acts, or crimes. As I explained in an earlier post here, and as I have explained elsewhere, the activity we refer to as cybercrime often consists of nothing more than using a computer to commit a crime that is probably as old, or almost as old, as humanity. So, if someone uses a computer and the Internet to siphon funds from someone else's bank account, it is simply theft (taking property from another without consent) as far as the law is concerned. There are, however, good reasons to consider the perpetrator's use of computer technology in the commission of this and other crimes; aside from anything else, it lets the perpetrator commit the crime remotely (the perpetrator is in, say, Brazil, the bank account is in the United States), which can make it difficult for law enforcement to "solve" the crime. The use of computer technology can also increase the scale on which crime is committed; an online fraudster can defraud many more people in a given space of time than she could if she had to deal with each of them face-to-face. Cybercrime, like all crime, is committed by civilians whose motives are purely their own. (There is an exception to this, which I will note below.)
- Cyberterrorism essentially consists of using computer technology to engage in terrorism. Terrorism consists of acts that are committed for political, rather than economic, motives. Much of crime is committed for economic reasons, as in the examples I gave above; terrorism is committed to further certain political goals. It is usually intended to demoralize a civilian population (which differentiates it from warfare, which is not supposed to target civilians), and usually accomplishes that, in the real-world, by destroying property and injuring or killing as many civilians as possible. The 9/11 attacks on the World Trade Center are a perfect example of real-world terrorism; they were intended to destroy a premier symbol of capitalism and, in so doing, undermine the morale and confidence of U.S. citizens. As I explained in an earlier post, we have not, as yet, seen cyberterrorism, but I am confident we will. I do not think, as I said in my earlier post, that cyberterrorism is an effective way to destroy property and human life on the scale and with the shocking simultaneity one can achieve by using bombs, airplanes and similar real-world methods. I do think, though, that computer technology can be used to erode citizen confidence in the security and stability of the internal systems upon which they rely. As I noted in my earlier post, one way to do this would be to launch sequenced, synchronized attacks shutting down ATM systems and other financial mechanisms in carefully selected cities around the United States. As the attacks progressed from city to city, it would become increasingly apparent that they were not random, were not the product of software bugs, were not otherwise explainable but were, instead, the product of terrorist activity.
Attacks such as these would not inflict the sheer horror of the 9/11 attacks, but they could further terrorist goals by creating a climate of insecurity and anger at the government, something analogous to what we saw with the Katrina fiasco. Like terrorism, cyberterrorism is carried out by individuals who are part of a group that is held together by a commitment to a specific political ethos.
- Cyberwarfare is using computer technology to wage war. The distinguishing characteristic of war is that it is a struggle between nation-states; it is, like all human activity, physically carried out by individuals, but those individuals are acting for a particular nation-state. Like terrorism, warfare tends to result in the destruction of property (often on a massive scale) and in the injury and deaths of individuals (often many, many individuals). Unlike terrorism, war is supposed to be limited to clashes between the aggregations of individuals (armies) who respectively act for the warring nation-states. Injuring and killing civilians (those who are not serving in one of the combatant nation-states' armies) occurs, but it, like most property damage and destruction, is supposed to be a collateral event. The primary focus of war in general, and of particular wars, is to "triumph" over the adversarial nation-state(s) (whatever that means in a given context); inflicting injury and death on civilians and destroying property is not the primary focus of warfare. Cyberwarfare (also known as "information warfare") is a logical consequence of migrating much of human activity into cyberspace. Several years ago, the Department of Defense defined cyberwarfare as "actions taken to achieve information superiority by affecting adversary information, information-based processes, information systems, and computer-based networks while defending one's own" computer systems, information, etc. More simply, cyberwarfare consists of using cyberspace to achieve the same general ends nation-states pursue via conventional military force: gaining certain advantages over a competing nation-state, or preventing a competing nation-state from gaining advantages in turn. As I write this, it is clear that many nation-states are already engaging in cyberwarfare, though on what I think is a relatively small scale.
Some countries are training/have already trained "hacker warriors" and are using them to mount attacks on other countries, many of which are developing their own cyberwarfare capabilities. From what I can tell, most of the attacks so far resemble skirmishes rather than full-scale "cyber-battles" (whatever a full-scale cyber-battle would look like . . . . )
In the real-world, we know who deals with what:
- Law enforcement officers (in the U.S. local police, state police and, sometimes, federal agents) deal with crime.
- Law enforcement officers plus, perhaps, specialized law enforcement officers (the FBI in the U.S., specialized police units in other countries) deal with terrorism. Usually, you tend to see a mix of "regular" and "specialized" police responding to terrorism because the local police are likely to be the first responders to a terrorist incident . . . as we saw with the 9/11 attacks on the World Trade Center. There, the NY police and fire departments were the first to deal with the attacks, though the FBI and related federal agencies quickly became involved, as well.
- The military deals exclusively with warfare.
- It's generally not difficult to sort activity into these categories when we are dealing with the real-world: Crime is pretty easy to spot, especially since much of it tends to be one-on-one crime, e.g., one person robs another, one person kills another, etc. And crime falls into identifiable categories: theft, robbery, rape, murder, fraud, arson, etc.
- Real-world terrorism is generally easy to spot, even though it involves activity that can also fall within the definition of crime, i.e., harming/killing people and destroying property. Real-world terrorism is usually easy to distinguish from crime because (i) it is irrational and (ii) the scale on which it is committed vastly exceeds what one usually encounters with crime.
- Take the attacks on the World Trade Center, for example: They are irrational in the sense that they produced no financial gains (unlike, say, bombing part of one of the WTC towers and using that as cover to rob a bank or a jewelry store). Much of crime, as I have said before, is committed for financial gain.
- There are, however, crimes that are not committed for financial gain; in any city in the U.S. (or elsewhere) one can read daily about murders committed for no rational reason, for no purpose relating to financial gain or the achievement of other rational ends (like ridding oneself of an unwanted spouse). But those crimes tend to involve people who know each other: Husbands kill wives, wives kill husbands, employees "go postal" and kill people in their workplace. In crimes such as these, there is a link, a factual nexus, between the perpetrator and the victims. They also tend to be limited in scale: The perpetrator kills only the person(s) he/she knows and is angry or frustrated with.
- In real-world terrorism, the activity is not rational -- why would anyone fly a plane into the World Trade Center? There is no ostensibly rational motive; the motivations of the Al Qaeda members who actually did it are, of course, quite rational if one accepts the ideological premises from which they operate, but to the uninitiated the conduct seems irrational. That apparent irrationality is one clue that we are dealing with terrorism, just as it is when a suicide bomber blows up himself/herself and whoever happens to be in the area. A second clue, another differentiating factor, is that the scale is indeterminate and there is no personal nexus between the act and its victims: The suicide bomber blows up some random number of people, none of whom he/she knows, none of whom he/she has any personal grudge against.
- I could go on, but I think (hope) my point is clear -- it is relatively easy to identify terrorism in the real-world.
- Finally, it is very easy to identify warfare in the real-world. When the Japanese bombed Pearl Harbor or when the U.S. began bombing in Iraq in 2003, no one who heard about/witnessed the attacks could have the slightest doubt that this was warfare . . . not crime, not terrorism. Both were conducted by specialized cadres of individuals associated with the attacking nation-state, all of whom wore distinctive attire and distinctive insignia.
Even here, though, the categorization does not always hold: Al Qaeda and other terrorist groups have been known to use online fraud (especially credit card fraud) as a way to raise money for their terrorist activity. If terrorists are engaging in what would otherwise be cybercrime, is the activity still cybercrime or does it become cyberterrorism? I'd say it's still cybercrime because while it is being perpetrated by those who style themselves as terrorists, it is, at bottom, still just fraud.
I want, though, to focus on the problem I noted above: the challenge of initially identifying what type of cyberactivity is at issue and ensuring that the proper agencies/personnel respond to it.
Imagine, say, that a series of sequenced attacks occur on financial systems scattered around the U.S. We will simplify the example by assuming that each of the attacks takes the same form. (It would, of course, be relatively easy to structure the attacks so they differ in varying degrees.)
So, keeping things simple, let us assume that all/many/most ATMs are taken offline (i) in Des Moines on April 1; (ii) in Portland on April 2; (iii) in Reno on April 3; (iv) in Cincinnati on April 5; (v) in Nashville on April 6; (vi) in Miami on April 7; and so on. The scenario might involve keeping the ATMs offline, or it might involve shutting them down, bringing them back up and then shutting them down again (which I think might be more effective). This basic pattern could be coupled with other attacks on banking systems . . . online banking might be shut down, data might be scrambled, and so on.
Take that basic scenario: Who would respond (initially -- we'll get to escalating responses in a minute)? The local police would respond. It would presumably be regarded as a cybercrime -- maybe the stereotypical teenage hacker shutting down the system for fun, maybe a prelude to an extortion effort by professional hackers.
Assume, now, that the attack is not a cybercrime, that it is being perpetrated by those "hacker warriors" I mentioned earlier -- cyberwarriors trained and recruited by a nation-state, one that is hostile to the U.S. and that is using cyberspace in an effort to gain certain tactical advantages. Here, the tactical advantage might be an initial step toward destabilizing the financial system in the U.S.
How long would it take for us to realize we were under such an attack? How long would it take for us to realize that this was cyberwarfare, not cybercrime? How would that realization come to pass . . . if at all?
For that realization to occur, someone, somehow, would have to be able to see the big picture: would have to know that these attacks were occurring, would have to see the sequencing in the attacks, would have to know about the similarity in the attacks. How would that come to pass?
What if the local police in each of the cities in which an attack occurred simply believed it was a cybercrime? What if the local police, assisted, maybe, by the state police, sought to deal with it on their own? I think this is the most likely scenario, at least for a considerable period of time.
I hope, but doubt, that we have procedures, personnel, and data-gathering processes in place that allow us to track incidents such as these at a global level . . . that, in other words, let someone, somewhere, in some official capacity, grasp what is occurring on a larger scale.
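To make the "big picture" problem concrete, here is a minimal sketch of the kind of cross-jurisdiction correlation that would be needed. The data, the thresholds, and the function are all hypothetical illustrations of my own, not any agency's actual system: the point is simply that if incident reports from many cities were pooled in one place, even a crude check could flag the ATM scenario described above as sequenced rather than random.

```python
from datetime import date

# Hypothetical incident reports: (city, date, incident_type).
# In reality each of these would sit with a different local police
# department, which is exactly why no one may ever see the full list.
reports = [
    ("Des Moines", date(2025, 4, 1), "atm_outage"),
    ("Portland",   date(2025, 4, 2), "atm_outage"),
    ("Reno",       date(2025, 4, 3), "atm_outage"),
    ("Cincinnati", date(2025, 4, 5), "atm_outage"),
    ("Nashville",  date(2025, 4, 6), "atm_outage"),
    ("Miami",      date(2025, 4, 7), "atm_outage"),
]

def looks_sequenced(reports, incident_type, max_gap_days=2, min_cities=4):
    """Flag an incident type if several distinct cities report it in
    close succession. The thresholds are arbitrary illustrations."""
    hits = sorted((d, c) for c, d, t in reports if t == incident_type)
    cities = set()
    prev = None
    for d, c in hits:
        if prev is not None and (d - prev).days > max_gap_days:
            cities = set()  # gap too large; start counting a new run
        cities.add(c)
        prev = d
    return len(cities) >= min_cities

print(looks_sequenced(reports, "atm_outage"))  # True: six cities, no gap over 2 days
```

A single jurisdiction running this check on its own reports would see only one city and flag nothing; the pattern only becomes visible when the reports are aggregated, which is the institutional gap the paragraph above is worried about.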
Otherwise, we could become the target of cyberwarfare and not even know it. In the 1970s there was, I think, a slogan -- something like "What if they gave a war and no one came?" Maybe the slogan for the 21st century should be something like "What if they started a war and we didn't know it until they won?"