Friday, January 30, 2009

"Without Authorization"

As I’ve noted before, there are two different kinds of computer hacking (or computer trespass) crimes: Accessing a computer without being authorized to do so (outsider attack) and exceeding the scope of one’s authorized access to a computer (insider attack).

As I’ve also noted, crimes that fall into the first category are usually factually unambiguous; it’s usually pretty easy to prove that an outsider improperly gained access to a computer or computer system.


As I explained in an earlier post, the crimes that fall into the second category can be very factually ambiguous. Here, we’re talking about someone who is legitimately authorized to access a computer system but who goes “too far,” who uses the system in a way that is – explicitly or implicitly – not within the scope of his or her authorization. This post is about a recent federal case that addressed this precise issue.

It’s a civil case: Condux International, Inc. v. Haugum, 2008 WL 5244818 (U.S. District Court for the District of Minnesota). As I’ve noted, 18 U.S. Code § 1030, the basic federal cybercrime statute, creates a civil cause of action for one who has been damaged by a violation of the statute. 18 U.S. Code § 1030(g). In this case, Condux sued Haugum for violating several provisions of § 1030, based on these facts:
Condux . . . manufactures and installs tools and equipment used in the electrical utility, electrical contracting, telecommunications, and cable television industries. Haugum . . . worked for Condux . . . as Vice President of Global Sales. As vice president, Haugum was responsible for overseeing sales and marketing for the company, and accordingly, was authorized to access `confidential business information’ (such as Condux's customer lists, pricing and sales data, profit-margin data, and engineering drawings of Condux's products) stored on Condux's computer system. Condux's employee handbook provides that confidential business information owned by Condux is not to be misappropriated by employees for their own personal benefit.

In November 2007, Haugum exchanged emails with a former Condux employee indicating [he] was considering quitting his job and starting his own competing business. Condux alleges that in December 2007, Haugum requested that an employee in Condux's information technology department send him an electronic list of Condux's customers and their contact information. Also, Condux asserts, Haugum downloaded over forty engineering drawings from Condux's computer system in January 2008. Soon thereafter, Haugum announced his resignation and left Condux on February 15, 2008.

Condux asserts that since Haugum's departure, it has learned [he] `attempted to delete evidence of his download of the engineering drawings’ and discovered a document drafted by Haugum that included a resolution to develop a business to compete with Condux. Condux alleges further that Haugum has (1) approached one of Condux's distributors about doing business directly with Haugum; (2) exchanged emails with a former Condux employee in which the former employee agreed to send Condux's confidential business information to Haugum; and (3) `directed’ the former employee to delete evidence of those emails and the accompanying transfer of confidential business information. Condux claims that Haugum's activities in obtaining the confidential business information were wrongful . . . and that Condux has suffered damages as a result.
Condux v. Haugum, supra.

Condux specifically alleged that Haugum violated 18 U.S. Code §§ 1030(a)(2)(C), 1030(a)(4) and 1030(a)(5)(A). Haugum filed a motion to dismiss the suit, claiming that Condux had not adequately stated a claim for damages under any of these sections.

The federal judge began her analysis of his motion to dismiss by explaining that a
violation of subsection (a)(2)(C) occurs when a person intentionally accesses a computer without authorization or in excess of authorized access and thereby obtains information from a `protected computer’ if the conduct involved an interstate or foreign communication. . . . Subsection (a)(4) is violated if a person knowingly and with intent to defraud, accesses a protected computer without authorization or in excess of authorized access and by means of such conduct obtains anything of value. . . . And a violation of subsection (a)(5)(A) occurs when a person intentionally accesses a protected computer without authorization and as a result of such conduct causes or recklessly causes damage. . . . Thus, violations of subsections (a)(2) and (a)(4) require allegations that Haugum accessed Condux's computers either without authorization or in excess of authorized access, while violations of subsection (a)(5)(A)(ii) and (iii) require an allegation that Haugum accessed a protected computer without authorization.
Condux v. Haugum, supra. Haugum argued that Condux didn’t have a valid claim under any of the sections because
his position as vice president `authorized’ him to access Condux's computer system and specifically to access the confidential business information and, therefore, Condux is unable to allege that he acted without authorization or in excess of authorized access. Condux does not dispute that Haugum was permitted to access the confidential business information; instead, Condux contends that Haugum was without authorization or exceeded his authorized access because he was `never authorized . . . to access its computer system to misappropriate confidential business information for his personal competitive use.’ In other words, Haugum was without authorization or exceeded his authorized access because of his wrongful intended use of the confidential business information.
Condux v. Haugum, supra.

Courts are divided on this issue. Some have found that an employee acts without authorization or exceeds authorized access when he accesses confidential or proprietary business information from his employer's computers which he is authorized to access, but then uses that information in a manner that is inconsistent with the employer's interests or violates contractual obligations or fiduciary duties. Other courts have taken a narrower view and held that § 1030 is implicated only by the unauthorized access, obtainment, or alteration of information, not the misuse or misappropriation of information obtained with permission. Condux v. Haugum, supra. The Condux judge decided the latter position is the correct one:


The interpretation advanced by Condux . . . focuses on what a defendant did with the information after he accessed it (use of information), rather than on the appropriate question of whether he was permitted to access the information in the first place (use of access). Had Congress intended to target how a person makes use of information, it would have explicitly provided language to that effect. . . . [O]ne need look no further than another subsection of § 1030 to see explicit language that targets a person's use of information. See 18 U.S.C. § 1030(a)(1) (prohibiting the access without authorization or in excess of authorized access and subsequent `communicat[ion], deliver[y], or transmi[ssion]’ of certain information). Thus, `the plain language of [subsections (a)(2), (a)(4), and (a)(5)(A)(ii) and (iii)] target “unauthorized procurement or alteration of information, not its misuse or misappropriation.”’
Condux v. Haugum, supra.

The judge therefore granted Haugum’s motion to dismiss Condux’s § 1030 claims, noting that there was “no dispute that Haugum, as Vice President of Global Sales, was permitted to access Condux's computers. Therefore, he was not `without authorization’ when he accessed the computers. Additionally, because he was permitted to access the specific confidential business information, he did not `exceed authorized access.’” Condux v. Haugum, supra. The judge also noted that this did not leave Condux without recourse because it could pursue state law claims such as misappropriation of trade secrets, misappropriation of confidential business information, breach of fiduciary duties, and unfair competition. Condux v. Haugum, supra.

I think this judge got it right, given how § 1030 is currently written. And I’m not sure I would want to see it revised to make it encompass Condux’s theory of wrongful access. It seems to me the more logical option, if we decide any changes are needed, would be to make the basic act of gaining unauthorized access to a computer system or exceeding authorized access to such a system a crime (and civil cause of action); then we could make it a more serious crime (an aggravated wrongful access crime) to do that AND use the data accessed in a manner that is harmful to the rightful owner of the computer system. That would preserve the notion of wrongful access as digital trespass while still giving the statute a way to address egregious instances of wrongful access.

Wednesday, January 28, 2009

Authentication and the Erased Hard Drive

As I explained in an earlier post, evidence must be authenticated before it can be admitted in a trial or other court proceeding. That is, the party offering the evidence (the proponent) must show that it is what it purports to be.

Rule 901(a) of the Federal Rules of Evidence governs authentication in federal cases. It says the “requirement of authentication . . . as a condition precedent to admissibility is satisfied by evidence sufficient to support a finding that the matter in question is what its proponent claims” it to be. Rule 901(b) gives some examples of how evidence can be authenticated: testimony by someone who can identify it; an expert’s comparing it with “specimens which have been authenticated”; distinctive characteristics; public records; or any other method prescribed by law. Every state has a similar rule that governs authentication in court proceedings in that state.

This post is about a case in which the issue of authentication became way more complicated than it needed to be. The case is State v. Ross, 2009 WL 118958 (Ohio Court of Appeals 2009), and it arose from these facts:
Officer Rob Kohli . . . posed as a fourteen-year-old girl who lived in Lima, Ohio on the internet with the screen name `sarah2hot420.’ Ross, . . . who was residing in Texas at that time, engaged in on-line chats with sarah2hot420. Some of their conversations involved discussions of various sexual activities, which included Ross asking sarah2hot420 if she would engage in oral sex . . . with him.

In January of 2007, Ross arranged a meeting with sarah2hot420 telling her he was going to be in Lima for business and wanted to engage in sexual activities with her while he was in town. Ross and sarah2hot420 agreed to meet at the Taco Bell on Shawnee Road in Allen County, Ohio. Ross arrived at the scheduled time, driving a vehicle that matched the description he had given sarah2hot420. Subsequently, Ross was arrested and taken into custody.
State v. Ross, supra. Ross was indicted on one count of importuning (soliciting a child to have sex with him) and one count of attempting to have sex with a minor. He pled not guilty and his defense lawyer served the prosecution with discovery requests to learn more about the evidence against Ross. The state responded with a written summary
of Ross' statements to law enforcement officers and a copy of the transcript of the conversation between Ross and the police officer who had posed as sarah2hot420. On April 10, 2007, Ross filed a motion to expand his discovery request, seeking copies of his computer hard drive and the law enforcement computer hard drive, as well as records concerning logs, testing and maintenance records of the police department. The State filed a response objecting to copying the entire hard drive of the police department's computer arguing that it contained other law enforcement information which was not discoverable.
State v. Ross, supra. The parties seemed to resolve the discovery issues at a June 2007 hearing, but on October 2, 2007, Ross filed a motion to produce the
police department's hard drive based upon his expert witness being unable to authenticate the transcript of the online conversation. On October 15, the State responded stating that the hard drive had been erased due to computer problems and asked the trial court to overrule the motion or . . . conduct an in camera inspection of the hard drive to determine whether any relevant contents remained and were discoverable.
State v. Ross, supra. On January 17, the trial court issued this ruling:
[T]he State claims the direct evidence, i.e. the police hard drive, no longer exists so there is not direct evidence of the conversation through which defense can verify the accuracy of the printed transcript. However, there is the testimony of Kohli. . . and apparently, a copy of defendant's hard drive that could be used to verify whether the transcript is accurate. . . . [A]ccording to the discovery responses filed, there are also alleged statements of defendant that verify the contents of the transcript or at least the alleged criminal nature of the conversation.

If there is no direct evidence, i.e. the police hard drive, then there is nothing to turn over to defendant. However, so there is no question about what the State represents, to wit, that the police hard drive was erased and no longer exists, the Court hereby orders an in camera review, using a computer expert of the Court's own choosing, to verify whether there is any evidence relevant to this case on the police hard drive.
State v. Ross, supra. As far as I can tell, an expert examined the hard drive and said it had, in fact, been erased, so “only the paper print-out of the conversation” between Ross and the officer posing as sarah2hot420 existed. State v. Ross, supra.

Ross pled no contest to the importuning count, the other count was dismissed, and the court sentenced him to “five years of community control.” State v. Ross, supra. He then appealed the trial court’s denial of his motion to produce the police department’s hard drive, arguing that the denial “violated his due process rights” by preventing him from “presenting a defense and challenging the chat room conversations.” State v. Ross, supra.

The Court of Appeals began its analysis by explaining that the Due Process Clause of the Fourteenth Amendment protects a defendant in a criminal case from being convicted when

`the state fails to preserve materially exculpatory evidence or destroys in bad faith potentially useful evidence.’ State v. Bolden, [Ohio Court of Appeals]. However, the United States Supreme Court has held the State's failure to preserve evidence does not automatically mean that such failure amounts to a constitutional defect that would require a dismissal of charges. See California v. Trombetta (1984), 467 U.S. 479. In fact, when the State fails to preserve `evidentiary material of which no more can be said than that it could have been subjected to tests, the results of which might have exonerated the defendant,’ due process is only violated if the State acted in bad faith. Arizona v. Youngblood (1988), 488 U.S. 51.
State v. Ross, supra. The Court of Appeals then held that Ross had not stated a valid due process claim:
Not only has Ross failed to explain what, if any, exculpatory evidence would be revealed by examining the police department's hard drive, there is no evidence remaining on the . . . hard drive which would be relevant to Ross' case. Moreover, not only has Ross failed to show bad faith on the part of the State, but he chose not to allege bad faith as part of his motion to produce. Therefore, we find that the trial court did not err when it denied Ross' motion to produce the police department's hard drive. . . .
State v Ross, supra.

Ross also claimed that the trial court erred when it found that the transcript of the chats between him and sarah2hot420 could be authenticated by Officer Kohli’s testimony. Here’s what the Court of Appeals said on that issue:
While we agree with the statements of law made by the trial court, those conclusions as to authentication, and implicitly to the admissibility of the transcript, were premature. Here, Ross filed a motion to produce evidence. . . . While the motion to produce was based on Ross' expert not being able to `authenticate’ the transcript, this was not a motion specifically challenging the authentication of the transcript. There was never any hearing on the matter of the transcript's authentication and there was never any foundation laid for the trial court's conclusions that the transcript could be authenticated. The trial court merely speculated as to methods other than the existence of the hard drive that could be used later to authenticate the transcript.

Because the issue of the transcript's authentication was never formally presented to the trial court nor appropriately ruled upon by the trial court, the issue cannot be properly considered by this Court.
State v. Ross, supra. So Ross lost his appeal.

There are two things I don’t quite understand about this case. One is whether the police department had only one hard drive; the opinion makes it sound that way, but that can’t be true. It must simply have been a particular police department hard drive that was the focus of all this. The other thing I don’t understand is how that hard drive could have been erased without a copy having been made earlier to preserve evidence. But maybe I’m missing something here.

Tuesday, January 27, 2009

New Book

My new book is out (actually, it's been out for about a week).

It's essentially about how the traditional threats to social order -- crime, terrorism (which is a kind of crime) and warfare -- can morph and fuse when they move online.

The book analyzes the attribution and response problems this creates, but from a primarily legal perspective. That is, it focuses on how and why the morphing of the threats challenges the ability of nation-states to identify and respond to them efficiently (and accurately). A lot of us probably already know all about that.

The book also includes my effort (product of work on some prior law review articles and other things, plus a fair amount of original thinking) to figure out why the challenges exist. The short answer to that one is that our response structures categorically divide threats into internal and external, the dividing line being a physical, territorial border. Physical borders essentially become irrelevant when conduct moves into cyberspace.

It also includes my effort at figuring out what we can do to improve the ability of nation-states (or supranational organizations, or evolved corporate governance structures, or whatever the dominant governance structure remains or becomes) to deal with these threats. My goal, really, is to contribute to the process of thinking about all this, by making problematic some things we take for granted.

The book is called Cyberthreats: The Emerging Fault Lines of the Nation-State, and it's published by Oxford University Press.

If you'd like to check it out, you can find it on Amazon (and other online book sites).



Monday, January 26, 2009

Passwords

Maybe you’ve seen one of the news stories about the revised Georgia statute (Georgia Code § 42-1-12) that now requires sex offenders to turn their Internet passwords, screen names and email addresses over to authorities. The purpose of the revised statute is to give authorities the ability to track what sex offenders are doing online, to, in the words of one news story, “make sure” they “aren’t stalking children online or chatting with them about off-limits topics.”

Critics of the law say it goes too far, since it will let law enforcement agents read emails a sex offender sends to anyone, including family and employers. The state senator who wrote and sponsored the legislation revising the statute concedes that it does, at least to some extent, invade the privacy of those to whom it applies. But he also says they have forfeited their privacy rights by having been convicted of a sex crime and argues that the need to protect children outweighs any privacy concerns.

Georgia is apparently one of a very few (two?) states that have expanded their sex offender registry requirements to include passwords, usernames and email addresses. The first state to do this seems to have been Utah, which adopted legislation requiring sex offenders to “provide Utah's sex offender registry with all of their internet identifiers and the websites on which they use those identifiers.” Doe v. Shurtleff, 2008 WL 4427594 (U.S. District Court for the District of Utah 2008). A man affected by this legislation filed a lawsuit challenging its constitutionality. He argued that it violated his First Amendment right to free speech, which includes a right to speak anonymously.

The Utah statute required that sex offenders provide the following to the Utah Department of Corrections (UDOC):
(i) Internet identifiers and the addresses the offender uses for routing or self-identification in Internet communications or postings; [and]

(j) the name and Internet address of all websites on which the sex offender is registered using an online identifier, including all online identifiers and passwords used to access those websites. . . .
Utah Code § 77-27-21.5(12). A related provision required them also to give the UDOC “any password required for use with an online identifier.” Utah Code § 77-27-21.5(2)(c). It defined “online identifier” as “any electronic mail, chat, instant messenger, social networking, or similar name used for Internet communication.”

Doe, who was challenging the Utah statute, made a number of First Amendment arguments, but the federal judge to whom the case was assigned found that his “most compelling” argument was that the Utah statutes abridged his First Amendment right to speak anonymously online. Doe v. Shurtleff, supra. In analyzing this argument, she noted that there were no opinions dealing with this issue; there were, of course, opinions dealing with challenges to different aspects of sex offender registry statutes, but not this particular issue. So the judge was, as she noted, “in wholly untested legal waters.” Doe v. Shurtleff, supra.

She therefore relied on Supreme Court decisions dealing generally with the right to anonymous speech, one of which was McIntyre v. Ohio Elections Commission, 514 U.S. 334 (1995). In McIntyre, the Court explained that
[a]nonymity is a shield from the tyranny of the majority. It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation-and their ideas from suppression-at the hand of an intolerant society. The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences, and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse.
The Utah judge noted that the Supreme Court has also recognized “the importance and unique nature of the Internet as a virtual `marketplace of ideas.’” Doe v. Shurtleff, supra (quoting Reno v. American Civil Liberties Union, 521 U.S. 844 (1997)). And she pointed out, quite correctly, that courts have combined these two principles to hold that the First Amendment protects anonymous online speech. Doe v. Shurtleff, supra.

The defendants in the Utah case (who included the Utah Attorney General) did not
directly challenge the right to anonymous speech online. Instead, they contend that because he is a sex offender, Mr. Doe has relinquished that right. Defendants cite cases in various other contexts that have approved curtailing the constitutional rights of sex offenders and felons. Defendants do not cite any authority, however, supporting the proposition that a sex offender who has completed his prison term and is not on parole or probation gives up First Amendment rights.
Doe v. Shurtleff, supra. So they made the same argument the sponsor of the Georgia legislation is making as to why that statute is not unconstitutional.

The judge disagreed. After reviewing cases, she found that
Mr. Doe has not given up his right to anonymous internet speech because of his status as a sex offender. . . . First, the United States Supreme Court has held that even people in custody have First Amendment rights, although restrictions on those rights are scrutinized under a low standard. . . Second, the [U.S. Court of Appeals for the Tenth Circuit] has ruled that a complete, unconditional ban on internet access as a condition of supervised release is overly broad and impermissible.
Doe v. Shurtleff, supra. The Utah judge found that the fact Doe, the plaintiff in the case, retained his First Amendment right to anonymous speech was “bolstered by the fact that Mr. Doe is not on parole or subject to supervised release.” Doe v. Shurtleff, supra.

She also found that the Utah statutes infringed on his right to anonymous speech: “If Mr. Doe provides the UDOC with his Internet information and knows that there are no statutory limits on how that information can be used by the UDOC, or others, he is less likely to engage in protected anonymous speech.” Doe v. Shurtleff, supra. The judge then had to decide if the infringement violated the First Amendment.

The infringement would NOT violate the First Amendment if (i) it was being imposed to protect a compelling government interest and (ii) it was the least restrictive means available to accomplish that end. Doe v. Shurtleff, supra. The judge found the statute was not the least restrictive means:
Utah undoubtedly has a compelling interest in protecting children from internet predators and investigating online crimes, which are the stated goals of the Registry Statute. The Registry Statute appears to achieve these ends. For example, if the UDOC makes sex offenders' internet information immediately available to investigators, investigations into potential crimes originating online could be hastened. Moreover, knowing that police will have their internet information would probably discourage some sex offenders from using the internet to help them commit crimes.

The only question is whether the Registry Statute's disclosure requirements are the least restrictive means available to meet these goals. They are not. With no restrictions on how the UDOC can use or disseminate registrants' internet information, the Registry Statute implicates protected speech and criminal activity alike. An alternative statute that contained such restrictions would be similarly effective and less threatening to protected anonymous speech.
Doe v. Shurtleff, supra. The defendants asked the judge to interpret the statutes as only letting the UDOC use the Internet information a registrant provided for the purpose of conducting criminal investigations and as barring the UDOC from releasing it to the public. Doe v. Shurtleff, supra. She found that doing this would in effect require her to re-write the statute, which was a job for the Utah legislature. The judge therefore held that the Utah statute violated the First Amendment.

Would a court reach the same conclusion as to the revised Georgia statute? I don’t know. The Georgia statute says the information collected pursuant to its requirements “shall be treated as private data,” except that it can be disclosed to law enforcement agencies for law enforcement purposes or to agencies conducting background checks. Those provisions don’t seem particularly problematic. But the statute also says that the Georgia Bureau of Investigation “or any sheriff maintaining” records under this legislation shall, in addition to informing the public about sex offenders living in their community, “release such other relevant information collected under this section that is necessary to protect the public concerning sexual offenders.” Georgia Code § 42-1-12(o). That provision might open the statute up to a challenge based on the holding in Doe v. Shurtleff.

Friday, January 23, 2009

Technology as an Aggravating Factor

As you may have seen, the Supreme Court rather recently granted certiorari in U.S. v. Abuelhawa, 523 F.3d 415 (U.S. Court of Appeals for the Fourth Circuit 2008). That, of course, means the Court will hear arguments on whether the Fourth Circuit’s decision is correct.

Abuelhawa was convicted of violating 21 U.S. Code § 843(b), which makes it a crime “knowingly or intentionally to use any communication facility in committing or in causing or facilitating the commission of” any federal drug crime.

Abuelhawa’s conviction was based on his using his cell phone to call a drug dealer and essentially order a delivery of cocaine. U.S. v. Abuelhawa, supra.

He appealed his conviction to the Fourth Circuit, arguing that if he had walked up to the drug dealer on the street and bought cocaine from him in person, all he could be charged with was violating 21 U.S. Code § 844(a), which is a misdemeanor. Abuelhawa claimed it was illogical and excessive to charge him with a felony under § 843(b) simply because he used a cell phone to arrange the purchase. He lost, but the Supreme Court has agreed to hear the case, so maybe the Justices think that result is not correct, after all.

While Abuelhawa’s case has nothing to do with computers, as such, § 843(b) does apply if someone uses a computer to facilitate the commission of a federal drug crime. In U.S. v. Long, 2006 WL 689125 (U.S. District Court for the District of Wisconsin 2006), for example, the § 843(b) charge was based on the defendant’s using email to facilitate federal drug crimes.

This post isn’t really about Abuelhawa’s case. It’s about using technology as a factor either to aggravate the severity of the offense someone is charged with or to aggravate the sentence someone receives after being convicted of what is essentially a generic crime.

As I’ve pointed out here and elsewhere, computer technology can have an impact on how we commit traditional crimes, like fraud or theft. We can incorporate that impact into the law in either of two ways. One is to adopt new, cyber-specific criminal laws; so, as I noted in an earlier post, we wind up with computer theft statutes that make it a crime to steal a computer. I, for one, don’t see that we always (or even often) need to create new, computer-specific laws in order to address the effects of computer technology. In many instances we can address the impact of computer technology by simply revising how we define a traditional crime.

As I noted in an earlier post, for example, the use of computers can alter how we commit theft. In the real world, theft is zero-sum: I take your laptop, which means I have it and you do not. In the digital world, I can copy your data, which means I have it and so do you; the problem, of course, is that you have been deprived of a quantum of the value of the data, i.e., the ability to exercise sole control over its possession and use. That kind of theft often won’t work under traditional theft statutes because they speak of stealing “tangible property” and/or of taking property with the intent to permanently deprive the owner of its possession and use. But, as I’ve noted here and elsewhere, we don’t need to adopt a new “computer theft” or “data theft” statute to address that kind of activity; we can simply expand the scope of our theft laws so they encompass both kinds of theft: zero-sum and non-zero-sum.

That brings me to the other way we can incorporate the impact of computer technology into our criminal law. If we take the approach to theft I outline above, we simply bring computer theft into the law as a type of generic theft. That means the penalties for computer theft will be the same as the penalties for traditional theft of tangible property. Some people will argue that the penalties for computer theft should be higher because a criminal can cause more damage by using a computer to commit theft. I can copy the rightful owner’s data and share it with 5, 10 or 100 people; I can steal data from a lot of people in a very short period of time, something a real world thief could not do.

You get the idea, I’m sure. And that brings me to the other option: Instead of creating new, computer-specific crimes, we could address the incremental “harm” the use of computer technology adds to the commission of traditional crimes like fraud and theft by making the use of the technology an aggravating factor at sentencing. Making the use of certain tools an aggravating factor is something we already do with weapons, for example. We make the penalty imposed for the commission of certain crimes – robbery, say – greater if you used a gun in committing the crime.

Guns and other weapons were made aggravating factors in sentencing because they create the risk that someone will be hurt or killed in the course of what is really a property crime. So some criminal tools aggravate sentences because they pose a danger to victims and bystanders. Other criminal tools aggravate sentences, or the level of the offense someone is charged with, under a very different theory – the theory responsible for § 843(b).

The premise of § 843(b) seems to be that the use of communications technology aggravates the offense level because such technology lets a drug dealer commit more crimes than he or she otherwise could. A drug dealer can use cell phones and/or computers to organize and run his drug organization and, as in the Abuelhawa case, to arrange drug buys more efficiently than he might otherwise be able to do. That makes sense, I think, at least to some extent. Abuelhawa argues that while the theory makes sense as applied to the person who is selling drugs, it makes no sense at all for someone like him, who only used his cell phone to buy drugs. That seems to be the issue the Supreme Court will hear, so we’ll see how they parse it out.

If they uphold Abuelhawa’s conviction, the decision will presumably validate the premise that aggravating a sentence or an offense level based on the use of technology legitimately applies regardless of one’s role in the commission of the crime. So if, say, a state wanted to make the use of computer technology an aggravating factor – in sentencing or in setting the offense level – for those charged with violating the laws against prostitution, the factor could be applied both to the person who was running the prostitution operation (the pimp) and to customers who used email or other computer communications to set up “dates” with the prostitutes. That's a very simple example. If the Court upholds Abuelhawa’s conviction, the principle that decision establishes could be used to incorporate the use of technology as an aggravating factor into a variety of criminal statutes, which might or might not be a good idea, depending on how it was done.

If, on the other hand, the Supreme Court buys Abuelhawa’s argument and holds that the use of technology as an aggravating factor can only be applied to the actual perpetrator of the crime – the person who sells drugs or panders prostitutes, say – that would prevent the technique’s being used as expansively, but it could still be used to increase the exposure the primary perpetrator faces, if caught.

Or the Supreme Court’s decision in the case may have little effect, in practice. I don’t know that there’s a great deal of interest in expanding the use of computer technology as a sentence or offense aggravator, and I don’t know that there needs to be.

I can see an argument for doing that with certain crimes, such as fraud. Computer technology lets fraudsters commit a lot more fraud than they could if they had to contact each of their victims individually, at least in the initial contact. I don’t think it makes sense to use the technology to increase the offense level, but I can see it as a relevant factor in sentencing.

Wednesday, January 21, 2009

Chutzpah in Kansas City

Maybe you saw this story: It says a Kansas City woman has been indicted by a federal grand jury for using her U.S. Department of Agriculture laptop to manage “prostitution businesses and correspond with clients”.

According to the story, Laurie Lynn McConnell, a statistician for the USDA’s Risk Management Agency, operated the prostitution businesses with John Miller, who also lives in Kansas City. It says the two – using the names Dark Phoenix and USA Honies – used the USDA laptop, cell phones, PayPal and “other electronic means” to operate their businesses. (Why, I wonder, didn’t they just buy their own laptops?)

I can’t find a lot of details online about their operation, but the story I’ve cited says they recruited prostitutes, advertised their services and took $100 from the fee paid for each “appointment.” McConnell and Miller advertised their businesses on Craigslist and other sites, screened potential clients and established a series of PayPal accounts that let clients pay the prostitution fees online.

The story says McConnell and Miller are charged with (i) conspiring to use computers, cell phones and other electronic devices to promote prostitution and with (ii) conspiracy to commit money laundering. What about a computer crime charge?

McConnell apparently exceeded the scope of her authorized access to the USDA laptop. I don’t know what the USDA’s computer use policies for its employees are, but I suspect they do not encompass using a USDA laptop to run a business – especially an illegal business – on the side. So McConnell presumably exceeded her authorized access to the USDA laptop.

The problem with using that as the basis of a charge of violating 18 U.S. Code § 1030 (which, as I’ve noted before, is the basic federal computer crime statute) is that § 1030 only makes exceeding authorized access a crime when certain conditions are met. Under § 1030(a)(2), it’s a crime to intentionally exceed one’s authorized access to a computer and thereby obtain information from any of the following: a financial institution, a federal agency or a “protected computer.” A protected computer is, among other things, a computer used in or affecting interstate or foreign commerce, so the laptop I’m writing this on (like your computer, if you’re reading this) qualifies; being connected to the Internet is itself enough to satisfy the interstate or foreign commerce requirement.

From the very little I know about this case, I don’t see any basis for charging McConnell under § 1030(a)(2). She presumably exceeded her authorized access to the USDA laptop, but I don’t see any indication that she did so in order to obtain information from a federal agency or from someone else’s computer. She was simply using the laptop as a tool to facilitate the conduct of her illegal prostitution business.

Section 1030(a)(4) makes it a crime to exceed one’s authorized access to a computer in order to execute a scheme to defraud and obtain “anything of value” (other than the use of the computer if such use is not worth more than $5,000). Again, based on the very little I know, I don’t see any basis for charging McConnell with this crime, either; while she was (according to the charges) exceeding her authorized access to the laptop and doing so to commit a crime, the crime was prostitution, not fraud.

So not an “exceeding authorized access” case, at least not under federal law. Chutzpah, though, definitely.

Monday, January 19, 2009

Motion to Quash Search Warrant

I ran across something I wasn’t familiar with in a recent decision from a federal district court in Florida: U.S. v. Shaygan, 2009 WL 86678 (U.S. District Court for the Southern District of Florida).

In Shaygan, the defendant filed a motion to quash a warrant to search a laptop. Here is how the issue came up:
On February 11, 2008. . . the Defendant gave the DEA agents written consent to search that office. In the course of that search the agents found a laptop computer, inside an unzipped carrying case on the floor next to the reception desk. . . .

After the laptop was seized, Agent Wells applied for a search warrant to search its contents, and in his application reported the Defendant's post-arrest statement that he used the laptop for work, including to store patient files. The Honorable Barry L. Garber issued the warrant on March 26, 2008; the parties later agreed to stay the execution of that warrant so the Defendant could bring this motion. The warrant authorizes agents to search the computer for eight different categories of electronic records, and Defendant argues that probable cause does not support the search for . . . four categories of records.
U.S. v. Shaygan, supra.

In his motion to quash the search warrant, Shaygan argued that it should be quashed – or nullified – because it was not supported by probable cause. It seems to me that argument really goes only to the four categories he identified as not being supported by probable cause, but I’m not really concerned about Shaygan’s motion, as such.

What interests me is the notion of moving to quash a search warrant. As I explained in an earlier post, the process of challenging a search warrant is usually retrospective; that is, it usually involves a challenge brought after the warrant has been executed. As I also noted in that post, the retrospective challenge can be brought by either of two motions: a motion to suppress evidence found by executing the search warrant; and a motion for the return of property seized pursuant to a search warrant.

I’m sure we’re all familiar with motions to suppress, so I won’t spend much time on them. When you move to suppress, you’re saying the government shouldn’t be allowed to use evidence it’s already found and has in its possession. So someone who files a motion to suppress is trying to prevent the government from using evidence it has seized, either pursuant to a search warrant or pursuant to an exception to the warrant requirement. A motion to suppress is, therefore, filed after someone has been charged with a crime.

When you move for the return of property, you’re saying, in effect, that the government seized your property in order to do something with it (search it for evidence, which is the usual dynamic when a computer is at issue, or use the item(s) seized as evidence at trial, which is the usual dynamic when the item seized is tangible evidence) but the government has already done what it needed, so you should get your property back. So someone whose computer was seized pursuant to a warrant might file a motion for return of property, arguing that the government has searched the computer (maybe made a mirror image of its hard drive) and therefore has no further need for it. In one case, a defendant who had pled guilty, been sentenced and waived any right to appeal his guilty plea or sentence moved to have the computer that was a source of evidence against him returned to him so his girlfriend could use it while he served his time. A motion for return of property does not challenge the government’s ability to search the property or use what it finds at trial or sentencing. Motions for return of property are about the property, as such. They therefore can be filed before someone has been charged or after they’ve been charged (or convicted).

Motions to quash a search warrant are prospective, rather than retrospective, and they have a very different function from motions to suppress or to have property returned. A motion to quash a search warrant, as you can see from the facts in the Shaygan case, is an attempt to prevent the government from executing a warrant it has obtained but has not yet executed. As far as I can tell, a motion to quash a search warrant is usually, if not always, filed before someone has been charged with a crime. That’s only logical, I suppose, since the motion seeks to prevent the government from conducting a search and finding evidence which, presumably, will be used to charge the person with a crime.

Your ability to move to quash a search warrant apparently depends on what jurisdiction you’re in. The Shaygan court, like other federal courts, entertained that motion, so it presumably works under federal law. And motions to quash a search warrant seem to be common in California. I found a recent Missouri case, though, which indicates they’re not available in every state.

In In re Search Warrant for 415 Locust Street, 2008 WL 4861953 (Missouri Court of Appeals 2008), two business owners filed a motion to quash the search warrants that had been executed at their businesses. The officers who executed the warrants seized paper and electronically-stored documents; the motion asked the court to quash the warrants and, I gather, prevent the state from actually searching the hard and soft copy documents. (They also wanted them returned, but that’s a different issue.)

After the trial court denied the motion to quash the warrants, the business owners appealed the ruling. The Missouri Court of Appeals held that they could not file a motion to quash the warrants because of a change in Missouri law, i.e., the repeal of a former rule of Missouri criminal procedure. As the court explained,
When the supreme court repealed Rule 33.03 and declared . . . Chapter 542 governed procedure in searches and seizures, no provision comparable to Rule 33.03(b) was enacted. While section 542.296.7 provides for the return of seized property to the movant, it is only upon the court's sustaining the motion to suppress. Section 542.296 contains no mechanism for challenging the lawfulness of a search and seizure and requesting the return of seized property outside of filing a motion to suppress in the pending criminal proceeding. Unlike former Rule 33.03(b), the present statutory scheme for challenging an unlawful search and seizure does not provide for a separate motion to quash the search warrant.
In re Search Warrant for 415 Locust Street, supra. So the motion to quash a search warrant seems to work in the federal system, is definitely available in California and may be available in other states as well – though not, after this decision, in Missouri.

I’m trying to figure out what a motion to quash a search warrant accomplishes. So let’s consider some scenarios.

First, assume police execute a search warrant for 3 stolen handguns. They find guns matching the description and serial numbers of the stolen guns and seize them. The person whose property was searched can’t file a motion to quash the search warrant because it’s already been executed; the government has done the searching it needs to do and has found all the evidence it needs. There therefore is no point in moving to quash the warrant. If the government used the guns to charge the person who had them with theft, that person can move to suppress the guns on the grounds that the warrant was flawed or improperly executed; if the government has not charged the person and they’re so inclined (and can show that these guns weren’t stolen), they can file a motion to have the guns returned to them (however unlikely that may be).

Now assume the government does what it did in the Shaygan case: Officers obtain a warrant to search someone’s home for, say, evidence of drug dealing; as they execute the warrant, they find a laptop computer and seize it. Here, they’re not interested in the laptop, as such (it’s not suspected of being stolen for example); what they’re interested in is the evidence they think the laptop contains. We’ll assume that the original search warrant does not authorize a search of the data on the laptop; the officers therefore apply for and obtain a warrant to search the laptop for evidence of drug dealing.

If they execute the search warrant and find evidence of drug dealing AND possession of child pornography on the laptop, the owner’s only option is to file a motion to suppress the evidence falling into both categories. With regard to the evidence of drug dealing, the owner might argue that the warrant, say, wasn’t supported by probable cause or signed by a neutral and detached magistrate; the goal is to nullify the government’s authority to search the laptop for evidence of drug dealing. With regard to the child pornography, the defendant will say the government can’t use it because the warrant did not authorize a search for child pornography, which is true. The government, though, may be able to argue that in the course of conducting an authorized search for evidence of drug dealing, officers found child pornography in “plain view.” As I explained in an earlier post, the plain view doctrine lets the government use evidence it finds while conducting an authorized search for another type of evidence.

If the laptop owner could successfully move to quash the warrant to search the laptop for evidence of drug dealing, he MIGHT be able to prevent the government from finding the child pornography on the laptop. That is, if the laptop owner got the search warrant quashed, the police couldn't search the laptop for evidence of drug dealing and therefore would not find the child pornography that would be in plain view.

What I don’t understand is what this accomplishes in the long term. Doesn’t it simply delay the process? It could clearly be a useful tactic if the person whose property the government wants to search could use a motion to quash to prevent the government from ever searching that property. I don’t see how that could work, though . . . since even if a court grants a motion to quash a search warrant, it presumably doesn’t bar the government from trying again, once it's figured out how to remedy the defect (e.g., lack of probable cause) in the original warrant.

Sometimes, the motion to quash a search warrant simply seems to be part of moving to suppress evidence. So in U.S. v. Barnett, 2008 WL 183560 (U.S. District Court for the Eastern District of Michigan 2008), the court granted the defendant’s “motion to quash search warrant and suppress evidence”. In instances like that, it seems to me the motion to quash a search warrant is really just a motion to suppress.

When someone is served with a grand jury subpoena, they can move to quash it based on, say, privilege. So an attorney might get a subpoena from a grand jury directing her to produce certain records; she could file a motion to quash (kill) the subpoena on the grounds that it seeks records protected by the attorney-client privilege. If the court agrees, she’s quashed (killed) the subpoena.

I just don’t see the potential for that kind of result with motions to quash a search warrant, but I could be missing something.

Friday, January 16, 2009

Misusing Authorized Access?

In a post I did a couple of years ago, I talked about the “insider” hacking crime: exceeding one’s authorized access to a computer to do . . . something.

The premise of the crime is that we need to make it illegal for insiders – people who legitimately have access to a computer or computer system – knowingly to exceed the scope of that access in order to destroy data, copy data, install a logic bomb that shuts down the system, etc. So the "exceeding authorized access" crime is usually a kind of burglary offense; it makes it illegal to "go" somewhere (in this case, "into" areas of a computer system which you're not supposed to access) in order to cause some kind of "harm."

As I explained in my earlier post, courts often find it an onerous task to decide (and explain) precisely how and why a particular insider exceeded his or her authorized access.

Sometimes, as I’ve noted before, the insider’s employer will have written policies that very specifically define what the insider is authorized to do with the employer’s computer system; that definition by implication excludes activities other than those that are specifically authorized. In situations like this, courts run through the policies and then infer that the defendant – the insider – knew that what he or she was doing exceeded what he/she was authorized to do, so the person committed the crime of exceeding authorized access.

Sometimes that analysis is pretty straightforward . . . as when a police officer uses a police computer to look someone up out of simple curiosity. Police departments have policies which state that the computers are only to be used for legitimate police business; the officers who use those computers are given the policies, told to read them and, in some instances anyway, required to sign a statement saying they have read and understood them.

Other times it can be a little more ambiguous. An employee can be explicitly authorized to use certain aspects of a computer system; implicit in that explicit authorization is the premise that the access is being granted so the employee can use the system in a way that benefits his/her employer. The problem is that the implicit limitation is not specifically spelled out, so that while we all know, as a matter of common sense, that the employee HAD to know what he/she was doing was way out of bounds, it can be very difficult, even impossible, to prove that beyond a reasonable doubt.

To prosecute someone for a crime, you have to prove every element, including the mental state, beyond a reasonable doubt; and when the statute makes it a crime to knowingly exceed authorized access to a computer system, that means the prosecution has to prove beyond a reasonable doubt that this particular defendant, as an individual, KNEW he/she was exceeding authorized access at the time he/she did whatever got him/her in trouble.

That means really knew, as in personally knew: “I am not supposed to do this, but I am going to go ahead and do it.” When you start using inferences from rules (and a lack of rules) to show that someone MUST have known that what he/she was doing was wrong, you’re not proving the mens rea of “knowingly.” You’re proving what the law calls recklessness: you were aware of a strong possibility that what you did was not within the scope of your authorization, but went ahead anyway.

Some suggest the way to resolve all this is for employers to adopt really clear, really Draconian scope-of-use policies; the premise is that such policies would deprive a defendant of the ability to make an “I didn’t know it was wrong” argument. One of my students said he worked for a company that had a policy which said, in part, “I will only use my access to the company’s computer for its benefit.” That sounds pretty solid . . . but what if an employee decides he wants to check out an aspect of security on his employer’s computer system – even though that is not his job – because he thinks he can highlight a problem and get credit for doing so (maybe even get promoted)? He gives it a shot, does something wrong, causes a problem that costs the employer money to fix . . . and is charged with exceeding authorized access. I’m not sure the “use the computer for the company’s benefit” policy would work to establish that he knowingly exceeded authorized access in this situation. He thought he was using the system for the company’s benefit.

Another suggestion is to use computer code to essentially lock people into particular access zones, so that they physically cannot exceed authorized access casually or even accidentally. They have to work at it. I can see how that might work in certain contexts, but I don’t see how it can work in others.
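
The “access zones” idea can be sketched with a minimal, hypothetical access-control check. The roles and resource names below are invented for illustration; they are not drawn from any real system or from the cases discussed above:

```python
# Hypothetical sketch of enforcing access zones in code, so an insider
# cannot casually (or accidentally) wander outside authorized areas.
# Role names and resources are invented for illustration.

ACCESS_ZONES = {
    "sales_vp": {"customer_lists", "pricing_data", "sales_reports"},
    "engineer": {"engineering_drawings", "source_code"},
}

def read_resource(role: str, resource: str) -> str:
    """Return the resource only if the role's zone includes it."""
    if resource not in ACCESS_ZONES.get(role, set()):
        raise PermissionError(f"{role!r} is not authorized for {resource!r}")
    return f"contents of {resource}"

print(read_resource("sales_vp", "pricing_data"))       # inside the zone
try:
    read_resource("sales_vp", "engineering_drawings")  # outside the zone
except PermissionError as e:
    print("blocked:", e)
```

Under a scheme like this, exceeding authorized access requires deliberately defeating the control, which makes the “I didn’t know it was wrong” argument much harder to sustain; the obvious limitation is that the zones have to be defined correctly in advance, which is exactly where the ambiguity discussed above tends to creep back in.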

I was thinking about all of that, and I came up with an idea . . . maybe even a good idea. It seems to me that the problem with the “exceeds authorized access” crime is that it focuses too much on the technology. In criminal law, every crime is designed to target a specific “harm”: murder encompasses causing the death of a human being, robbery encompasses taking someone’s property from them by using force, and so on.

It seems to me that the real “harm” we’re trying to get at with the “exceeds authorized access” crime is not so much the fact that you exceeded the scope of your authorized access to your employer’s (or some other owner’s) computer system. We only rely on that issue because we can’t prosecute you under the other alternative – unauthorized access – since you were an authorized user of the system.

Seems to me it might make sense to reconfigure the “exceeds authorized access” crime as a “misused authorized access” crime. Then you focus not on the really rather incidental issue of the access the perpetrator used to inflict the “harm” that led to the prosecution, but on the “harm” itself. That is, you focus on whether or not the perpetrator knew that what he/she was doing was not a legitimate use of the computer system. So if a police officer uses a police computer to look for dirt on an old friend, he knows that’s not a legitimate use of the system; he knows he’s misusing the system.

Wednesday, January 14, 2009

Computer Theft

A lot of state statutes and the general federal computer crime statute, 18 U.S. Code § 1030, criminalize the theft of computer data. And I’ve done at least one post about computer data theft.

As I explained, computer data theft is a little different from real-world theft because the data being stolen is usually copied . . . which means that both the thief and the rightful owner have the data. But, as I also explained in that post, the law can accommodate data theft by simply recognizing that the transfer of any quantum of the property – of the data – results in a loss to the rightful owner. Since theft criminalizes the act of depriving the owner of (some of his or her) property, copying data constitutes theft.


This post, though, is about a different kind of computer theft. Rhode Island has a computer theft statute that dates back to 1983 and that baffles me. Here is how the statute read a few years after it was adopted in 1983:
Whoever, intentionally and without claim of right, and with intent to permanently deprive the owner of possession, takes, transfers, conceals or retains possession of any computer, computer system, computer network, computer software, computer program or data contained in such computer, computer system, computer program or computer network with a value in excess of five hundred dollars ($500) shall be guilty of a felony and shall be subject to the penalties set forth in [another statute].
Rhode Island General Laws § 11-52-4 (1986). Here is the current version of this same statute:
Whoever, intentionally and without claim of right, takes, transfers, conceals or retains possession of any computer, computer system, computer network, computer software, computer program, or data contained in a computer, computer system, computer program, or computer network with a value in excess of five hundred dollars ($500) shall be guilty of a felony and shall be subject to the penalties set forth in [the same statute]
Rhode Island General Laws § 11-52-4 (2008). In 2006, the Rhode Island legislature modified the language of the statute so it no longer speaks of taking any of the items it lists “with intent to permanently deprive the owner of possession” of the item. That, I assume, was intended to incorporate the non-zero-sum conception of theft (data theft) I described above.

I don’t have a problem with that, or with making it a crime to steal data or software. What I have never understood about this statute is why the Rhode Island legislature found it necessary back in 1983, and still finds it necessary, to have a statute that specifically makes it a crime to steal computer hardware. Since computer hardware is tangible property, stealing it should fall within the scope of the state’s regular theft statutes.

I can’t find any cases or legislative history or law review articles that explain why the legislature did this. Maybe it just seemed like a good idea back in 1983, since that’s when personal computers were coming in, and they were a lot more portable (and therefore a lot more stealable) than the mainframes that preceded them.

Or not. I don’t know. If anyone does, I’d love to learn what the purpose of the statute really was.

Monday, January 12, 2009

Computer Forgery

This post is about computer forgery . . . or, more accurately, about what is not computer forgery.

Like many of my posts, this one is inspired by a case: People v. Carmack, 34 A.D.3d 1299, 827 N.Y.S.2d 383 (New York Supreme Court, Appellate Division 2006). In 2003, an Erie County Grand Jury
charged [Howard Carmack] with three counts of Forgery in the Second Degree, arising out of [his] actions in sending electronic mail messages (`e-mail’) purporting to have originated from three different e-mail accounts without permission from the accounts' owners. The Grand Jury also charged [Carmack] with one count of Criminal Possession of Forgery Devices for using a computer software program to send these e-mail messages that was specifically designed to alter or create false written instruments.
Respondent’s Brief in People v. Carmack, supra.

The forgery charge against Carmack was brought under § 170.10[1] of the New York Penal Law. Under § 170.10[1], someone commits forgery
when, with intent to defraud, deceive or injure another, he falsely makes, completes or alters a written instrument which is or purports to be, or which is calculated to become or to represent if completed . . . [a] deed, will, codicil, contract, assignment, commercial instrument, credit card . . . or other instrument which does or may evidence, create, transfer, terminate or otherwise affect a legal right, interest, obligation or status.
The other charge against Carmack was brought under New York Penal Law § 170.40[1]. Under this statute, one commits the crime of possessing forgery devices if he “makes or possesses with knowledge of its character any plate, die or other device, apparatus, equipment, or article specifically designed for use in counterfeiting or otherwise forging written instruments”.

Carmack went to trial and was convicted on both charges. More precisely, he was convicted of 3 counts of forgery and 1 count of possessing forgery devices. On appeal, his attorney noted that the prosecution was brought by the New York Attorney General’s Office under the
novel theory . . . that [Carmack] committed a number of white-collar Penal Law crimes through the sending of bulk commercial e-mail (also known as `spam’), using already existing computer domain (i.e., e-mail) names without permission and creating other domain names after obtaining identification information without permission. No money was stolen. The computer said to be used was located in a residence on Parkridge Avenue in Buffalo, owned by appellant's mother
Appellant’s Brief in People v. Carmack, supra. This is how Carmack’s attorney described what he had actually done:
Some people received returned commercial advertisements in their e-mail boxes; sent by someone else in their own unique e-mail names. When not going to a proper address, this e-mail was returned to where the Internet server believed it originated. Others had their names and addresses used to create new e-mail accounts to send out ads. The ads were for spy software, dietary supplements and cable television descrambling kits. Contact information in the ads mostly included a common phone number, 716-812-2144, the name, CSC Quick Products, and the addresses, 266 Elmwood Avenue, #172, and 341 Parkridge Avenue, both in Buffalo. . . .

The supposed motive . . . was to mass market products for free - - avoiding Internet service fees - - and hiding where the ads came from. . . . Accounts were opened and passwords were obtained using the legally purchased `Stealth’ mass mailer software program, allowing one to send out multiple e-mail advertisements at a time. These ads were made to look like they came from a different sender.
Appellant’s Brief in People v. Carmack, supra. Based on this, Carmack was apparently known in some circles as the “Buffalo Spammer.” Appellant’s Brief in People v. Carmack, supra.
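
For readers wondering why making e-mails “look like they came from a different sender” is technically easy: an e-mail’s From header is just a line of text the sender’s software writes, and nothing in the message format verifies it. Here is a minimal sketch using Python’s standard email library; the addresses are invented for illustration:

```python
# Minimal sketch: the From header of an email is arbitrary data set
# by the sender's software. Addresses here are invented examples.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "someone.else@example.com"   # nothing verifies this claim
msg["To"] = "recipient@example.com"
msg["Subject"] = "Advertisement"
msg.set_content("Commercial solicitation body here.")

# The message simply carries whatever From value was set; the SMTP
# protocol itself does not check it. (Modern SPF/DKIM/DMARC checks
# came later and operate outside the message format.)
print(msg["From"])
```

This is why the prosecution’s theory had to stretch forgery law to cover the conduct: the spoofing itself required no “device . . . specifically designed” for forgery, just ordinary mail-handling software.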

Carmack challenged his conviction on the charges, claiming that the evidence presented at trial did not prove he’d committed either forgery or possession of forgery devices. The New York Supreme Court agreed. After noting the definition of forgery quoted above, it explained that at trial the Attorney General’s office presented evidence that Carmack
sent multiple e-mails from his computers but used a computer program that made it appear that they were sent from the e-mail address of another person or entity. The e-mails at issue, i.e., commercial solicitations for computer programs, dietary supplements, and other products, do not constitute deeds, wills, codicils, contracts, assignments, commercial instruments or credit cards, nor do they `evidence, create, transfer, terminate or otherwise affect a legal right, interest, obligation or status.’ Thus, the e-mails do not constitute instruments that may be the subject of the crime of forgery . . . under Penal Law § 170.10(1).
People v. Carmack, supra. The court also found that the evidence presented at trial did not prove Carmack possessed a forgery device:
Similarly, the People failed to establish that the computer program used to send the e-mails was a forgery device within the meaning of Penal Law § 170.40(1). That Penal Law section criminalizes possession of any `device, apparatus, equipment, or article specifically designed for use in counterfeiting or otherwise forging written instruments’ and, here, the People's expert testified that the program at issue `can be used for very legitimate purposes absolutely,’ thus negating an essential element of the crime.
The court therefore reversed Carmack’s convictions on both charges.

Seems like a no-brainer, doesn’t it? What Carmack allegedly did so clearly does not fit within the definition of forgery (and, consequently, of the possession of forgery devices) that it seems peculiar the New York Attorney General’s office would have brought the charges. All I can assume is that the office was using the forgery charges as a substitute for the criminal spam statute New York did not have (and, as far as I know, may still not have).

As to what constitutes computer forgery, well, it’s just using digital technology to commit traditional forgery. Forgery, as the New York statute illustrates, has traditionally been falsifying a document in order to obtain money or other property to which the forger is not lawfully entitled. A few states have adopted statutes designed to make it clear that digital modification of data or documents constitutes forgery. Here’s Georgia’s statute:
Any person who creates, alters, or deletes any data contained in any computer or computer network, who, if such person had created, altered, or deleted a tangible document or instrument would have committed forgery [under the general forgery statute], shall be guilty of the crime of computer forgery. The absence of a tangible writing directly created or altered by the offender shall not be a defense to the crime of computer forgery if a creation, alteration, or deletion of data was involved in lieu of a tangible document or instrument.
Georgia Code § 16-9-93(d). Virginia and West Virginia have statutes that are almost identical to Georgia’s computer forgery provision. Virginia Code § 18.2-152.14; West Virginia Code § 61-3C-15.

I keep looking for computer forgery prosecutions, but Carmack’s is the only one I have found. I wonder if that means people aren’t committing computer forgery or if it means they’re committing it but not getting caught . . . .

Friday, January 09, 2009

(Attempted) Computer Murder

There has for years been speculation about whether a computer could be used to commit murder, i.e., to intentionally kill a human being.

I was researching that issue for something I’m writing, and managed to track down what seems to be the original version of a story I’ve heard for a long time. I thought it might have been an urban legend -- a purely apocryphal tale about attempted . . . something, maybe murder, maybe manslaughter. (Basically, murder is intentionally causing the death of another human being, while manslaughter is recklessly causing such a death.)

According to a story published in 1994, Dominic Rymer, then a 21-year-old male nurse in the United Kingdom, had hacked into the computer system at Arrowe Park Hospital, Wirral and modified the prescriptions for two patients. Nurse-hacker Alters Hospital Prescriptions, Computer Audit Update (February 1, 1994), 1994 WLNR 3804526. Here’s how the story described what Rymer did:
A nine-year-old boy, suffering from meningitis was only saved from serious harm by a sharp-eyed ward sister. She spotted that the youngster's prescription had been altered the previous day to include drugs used to treat heart disease and high blood pressure and an investigation was immediately launched.

It was then discovered that . . . Rymer had also secretly used the computer system at Arrowe Park Hospital . . . to prescribe antibiotics to 70-year-old Kathleen Wilson, a patient on a geriatric ward. She had been given the drug, but had suffered no adverse reaction.

Rymer, described as obsessed with computers and high-tech equipment, had also accessed other records. He had scheduled a patient to have an unnecessary X-ray and recommended that another patient be discharged.
Nurse-hacker Alters Hospital Prescriptions, supra.

Rymer was not charged with attempted murder but with unauthorized access to a computer that resulted in damage to data in the system. At trial, the prosecutor told the court that
Rymer used a doctor's PIN to access the computer at the hospital. He had memorized the number five months earlier, after observing a locum doctor having trouble accessing the system. Rymer altered the prescription for the nine-year-old boy suffering from suspected meningitis, and prescribed a potentially toxic drug cocktail of Atenolol, Temazepam, Bendroflumethiazide and Co-proxamol.
Nurse-hacker Alters Hospital Prescriptions, supra. According to the story, Rymer was “unable to explain why he had altered the treatment records, but denied having any malicious intent. He had developed a fascination for computers . . . and had developed a lack of sensitivity to the consequences of his actions.” Nurse-hacker Alters Hospital Prescriptions, supra.

The judge found Rymer guilty and sentenced him to a year in jail. And the hospital’s executive nurse said “tighter computer security” was implemented to ensure this did not happen again. Nurse-hacker Alters Hospital Prescriptions, supra.

I assume Rymer wasn’t charged with murder because there was no evidence that his purpose was to kill anyone; he may simply have been experimenting. I’m not sure why he wasn’t charged with manslaughter or whatever the analogous crime is under UK law. It seems to me that if someone hacks into a hospital’s prescription database and changes patients’ prescriptions, that person could be held liable for recklessly causing a death if an altered prescription had a lethal effect. Acting recklessly means the defendant was aware of some risk that death could result from what he was doing but ignored that risk and proceeded to alter the prescriptions anyway.

This case seems to be an artifact. I can’t find any cases since that involve conduct that could arguably be characterized as computer-facilitated homicide of whatever type . . . murder, manslaughter, negligent homicide. Perhaps that’s because hospitals have secured their systems so well that this can’t happen . . . except, of course, Rymer was an insider, a nurse who had worked at this hospital until a year or so before he altered the prescriptions. And as we all know, it’s very difficult to secure systems from insiders.

Maybe there haven’t been any incidents (reported incidents, I should say) of computer homicide because those who commit homicide prefer real-world methods. Maybe those who are inclined toward homicide tend to assume real-world methods are more reliable . . . which might be a reasonable assumption given what happened in the Rymer case.

I’ve always assumed (or maybe I’ve always just hoped) that if someone were to try something like this, the hospital staff would figure out that there’s something wrong with the prescription that’s suddenly shown up for a particular patient. I tend to assume the same problem would arise here that would have arisen in a computer homicide scenario that was circulating ten or so years ago, when I was just getting into cybercrime.

I can’t find any accounts of that scenario, so I’ll describe it as best I can. The theory was that a hacker (a truly malicious hacker) would access the databases of a company that makes children’s cereal. He would alter the recipe for a popular kind of cereal so that it would include a very high level of some mineral or chemical that is not toxic to people in small doses but that becomes toxic if consumed in large doses, especially if one were to consume large doses every day or so. According to the scenario, this would be a clever way to commit a particularly heinous kind of mass murder; children would die, the cereal company would be blamed for its negligence in goofing up the recipe, and the hacker would have gotten away with murder scot-free. The problem, of course, was that the cereal company would probably have noticed the alteration in its formula . . . especially since, as many pointed out, it would suddenly be buying tremendous quantities (truckloads) of that chemical.

I’m really not going anywhere with this post. It’s just a rumination on the notion of the computer as an implement of homicide which, of course, it can be and probably will be. If and when someone actually does use computer technology to commit murder, they will simply be charged with murder (or manslaughter or negligent homicide, as the case may be) because the tool used to commit the crime is irrelevant. However you intentionally cause another person’s death, it’s still murder.

Wednesday, January 07, 2009

Fear

I’m reading a great book: The Science of Fear by Daniel Gardner.

I highly recommend it. It’s an analysis of how and why our fears are often irrational; we fear things that are not particularly likely, and don’t fear things we should.

In one of the early chapters, he talks about how people refused to fly in the months after 9/11, and drove instead. He explains that even if the risk of a terrorist attack on an airplane in the post-9/11 era had been much higher than it actually was, it would still have been much safer to fly than to drive. He also points out how many people died because they drove instead of flying.
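Gardner’s point about relative risk can be checked with back-of-the-envelope arithmetic. The sketch below uses rough, illustrative U.S. fatality rates per passenger-mile; the numbers are my assumptions for the sake of the example, not figures taken from Gardner’s book:

```python
# Back-of-the-envelope comparison of per-mile fatality risk for driving
# vs. commercial flying. The rates below are assumed, order-of-magnitude
# figures for illustration only.

DRIVING_DEATHS_PER_BILLION_MILES = 7.0   # assumed rate for car travel
FLYING_DEATHS_PER_BILLION_MILES = 0.07   # assumed rate for airline travel

def trip_risk(miles, deaths_per_billion_miles):
    """Expected fatalities for a single trip of the given length."""
    return miles * deaths_per_billion_miles / 1e9

trip = 1000  # a 1,000-mile trip
drive = trip_risk(trip, DRIVING_DEATHS_PER_BILLION_MILES)
fly = trip_risk(trip, FLYING_DEATHS_PER_BILLION_MILES)

# With these assumed rates, driving is 100 times riskier per trip.
print(f"drive: {drive:.2e}  fly: {fly:.2e}  ratio: {drive / fly:.0f}x")
```

Even if you multiplied the flying rate tenfold to model a much more dangerous post-9/11 threat environment, driving would still come out ten times riskier per mile, which is exactly Gardner’s point.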


Why did they do that? Why did so many people react in such an irrational manner to what was, admittedly, a truly terrifying event?

Gardner explains that we have two systems of thought: System One is the unconscious, intuitive system of thought that is to some extent an artifact of our evolution. As Gardner says, it’s our gut instinct . . . the immediate response we needed when a lion showed up, or some other visceral, physical threat confronted us, back when we lived in a more dangerous world. System Two thought is the rational mode of thought we have developed over the last few millennia; it’s the more modern system of thought.

As Gardner explains, the people who refused to fly after 9/11 were giving in to System One thought. They were not assessing the situation rationally; they were reacting to the horrible images and stories they’d seen on TV and in newspapers. So from an outside observer’s perspective, what they did was irrational and foolish (not to mention self-destructive). That, he explains, doesn’t matter. From what I’ve read so far, it seems System One thought will trump System Two thought in any situation in which physical danger crops up (and maybe in others as well . . . I’m not that far into the book).

All of that made me think about how we – as a species – react to cyberthreats. It seems to me there’s a lot of System One thought going on when it comes to cyberthreats.

I don’t know about you, but whenever I mention to a “civilian” (i.e., to someone who isn’t a lawyer or someone who works in the cybercrime or computer security area) that my specialty is cybercrime, I almost always get the same reaction. They always talk about how horrible and frightening the online predators are. No one ever wants to talk about what I, for one, see as the more interesting, more serious concerns . . . the attacks on businesses, government agencies and other targets, the fraud and extortion, etc. No . . . they want to talk about some “horrible” story they “heard” about a pedophile online.

I’ve never understood that. I’ve never understood why these people seem to find the notion of online pedophiles to be so spectacularly frightening when, as far as I can tell, a child is much more likely to be harmed in the real, physical world by someone he or she knows . . . Uncle Fred or the soccer coach or the Boy Scout leader or the neighbor, etc. That, I guess, is giving in to System Two thought.

I’m still reading the book, and I’m still thinking about all this, but what I’ve read so far makes me wonder if System One thought doesn’t explain several things . . . one is the phenomenon I noted above, i.e., the focus on the army of pedophiles who are trolling the Internet to do uncertain things to unsuspecting children. I know pedophiles and perverts can stalk and harass and do other things to children in cyberspace, but I also don’t see how the “harm” they inflict rises to the level of the “harm” inflicted on children by the pedophiles they know. (I also don’t see why parents can’t take steps to minimize or eliminate the risk of online victimization, but that’s another issue.)

What I’m really wondering is whether the problem facing computer security people, and others who try to convince people to secure their systems and generally protect themselves (and their employers) from cyberthreats, is that they’re relying on System Two thought. From what I’ve read so far, it’s pretty clear that System Two thought is dull compared to System One thought, and is therefore less likely to motivate us to do things. We buy Brinks and ADT home security systems because we see the scary commercials on TV and read scary stories about home invasions; that’s System One thought at work.

We don’t really do much of anything to secure our computers and protect ourselves from online criminals because . . . it’s just not scary. It’s more of a cerebral threat, and while we can at some level understand those threats, they just don’t motivate us in the way home invaders and lions on the loose do. It’s an interesting notion. I’ll have to give it more thought.