Monday, August 31, 2009

New Border Search Directives

As you may know, on August 27 the Department of Homeland Security announced its “new directives on border searches of electronic media.” There’s a Customs and Border Protection (CBP) policy and an Immigration and Customs Enforcement (ICE) policy. You can find both policies here.



DHS said the policies are intended “to enhance and clarify oversight for searches of computers and other electronic media at U.S. ports of entry—a critical step designed to bolster the Department’s efforts to combat transnational crime and terrorism while protecting privacy and civil liberties.” DHS explained that the “directives address the circumstances under which CBP and ICE can conduct border searches of electronic media—consistent with the Department’s Constitutional authority to search other sensitive non-electronic materials, such as briefcases, backpacks and notebooks, at U.S. borders.”



As I’ve explained in several posts, the Constitutional authority to which the DHS refers is an exception to the 4th Amendment’s requirement that federal and state officers obtain a warrant before they search a place or a thing. The exception is based on an ancient principle, i.e., that a sovereign has the right to control what comes into and goes out of its territory. The border search exception therefore lets officers search property being carried by people entering the United States as well as those who are leaving it.



No one questions the validity of the exception or its applicability to things we carry into or out of the country. The issue that’s arisen over the last five or six years goes to the scope of the exception as it applies to laptops and other electronic media. As I’ve noted, the 4th Amendment gives us the right to be free from “unreasonable” searches and seizures, which means “reasonable” searches and seizures are legitimate.



To be reasonable, a search or seizure has to be (i) reasonable at its inception (i.e., authorized by a search warrant or by an exception to the warrant requirement) and (ii) reasonable in scope. So if a police officer gets a warrant to search my home for a stolen 40-inch plasma TV, he is authorized to initiate a search for that item; for that search to be reasonable, the officer can only look in places where the item listed in the warrant could be, which means he can’t look in my dresser drawers or other places that couldn’t possibly contain the TV. And he can only search until he finds what he’s looking for; at that point, the authorization for the search is exhausted and he has to quit.



In several cases, defendants have argued that the scope of a border search of a laptop or other electronic media should be more limited than the scope of a search of luggage or other property. If a Customs or other border officer stops me and wants to search my luggage, he/she can go through the whole bag. That’s been the default standard for the scope of a border search; the officer can look through the entire contents of the property the person is carrying into or out of the U.S. The defendants in these cases argued that the scope should be narrower for laptops and other electronic media because laptops, in particular, contain such a massive amount of information; the defendants claimed this creates a heightened privacy interest, and a heightened expectation of privacy, in laptops and other electronic media . . . but those arguments have been pretty unsuccessful.



The new DHS policies seem to be responding to the issues raised in those cases, as well as the need to prescribe standardized practices for carrying out border searches of electronic media. DHS has issued both policies plus a statement outlining the privacy implications of the policy; since both policies are quite detailed, I’m not going to attempt to parse their provisions here. Instead, I’m going to offer some general comments as to what they say and don’t say.



The CBP policy is entitled “Border Search of Electronic Devices Containing Information.” Section 3.4 of the policy defines “border search of information” as excluding “actions taken to determine if a device functions . . . or . . . if contraband is concealed within” it. The policy therefore applies to border searches the purpose of which is to examine information contained in an electronic device. Section 5.1.2 says that in “a border search, with or without individualized suspicion, an Officer may examine electronic devices and may . . . analyze the information encountered at the border, subject to the requirements and limitations provided herein and applicable law.” The reference to “individualized suspicion” makes it clear that the officer can examine information in a laptop or other electronic device without having “reasonable suspicion” (a standard lower than probable cause) to believe there’s evidence of a crime in the device. The search, in other words, will be a routine part of a border inspection. Section 8.1(3) of the ICE policy says that “[a]t no point during a border search of electronic devices is it necessary to ask the traveler for consent to search.” That, of course, is implicit in the border search exception itself.



Section 5.3.1 of the CBP policy says officers can “detain electronic devices, or copies of information contained therein, for a brief, reasonable period of time to perform a thorough” search of the item and/or its contents. It says the detention period normally shouldn’t exceed 5 days, but that can be extended. The CBP policy refers to copying information but doesn’t address when copying is in order; section 8.1(5) of the ICE policy says that the officer “should consider whether it is appropriate to copy the information” in a device and return it to its owner. This section of the ICE policy says that when it’s appropriate, devices should be returned to their owners “as soon as practicable.”



That brings me to the thing I find interesting about both policies. Both carefully outline standards and practices to be employed in detaining (and analyzing) laptops and other electronic devices. Neither seems to address the issue I’ve raised in at least one post – whether a border agent can detain the person who’s carrying the laptop (or other device) while agents examine the device and/or its contents.



When I refer to “detaining” the laptop owner I’m not referring to the brief detention that’s implicit in any border crossing, or even the somewhat expanded detention that’s involved in handing an item – including a laptop – to a border officer and waiting for him/her to look it over and hand it back. I’m referring to scenarios like the one I addressed in a recent post, i.e., scenarios in which the agent detains the laptop owner to try to get him to give the officer the key needed to decrypt the files on the laptop, to ask him about the laptop and its contents, or for other reasons. Both policies seem to assume that if the officer(s) need to keep the device or its contents, they will do so but let the person leave.



I wonder if that’s accurate. Both policies set out procedures under which border agents can have a seized device or seized information sent to experts to have the information at issue translated from another language or decrypted. ICE Policy § 8.4(1)(a). But what about the owner? If you have probable cause or reasonable suspicion to believe there’s contraband or evidence of a crime (terrorism, say) on a laptop, are you really going to let the owner go on his merry way to Dubai or wherever? Alternatively, if the contents of a laptop are encrypted and it will take experts a very long time to decrypt the data, do you just let the laptop owner leave or, as in the Boucher case, do you try to get him or her to give you the encryption key?



Neither policy seems to address the option of simply asking the device owner for the key or password needed to access its encrypted or password-protected contents. I wonder why. Maybe they don’t address this option because they implicitly assume it’s inherent in a border search of an encrypted or password-protected device. That is, maybe the drafters of both policies assumed that the agent conducting such a search can ask the device owner for the encryption key or password without implicating any constitutional provisions.



The agent can certainly ask that question without implicating the provisions of the 4th Amendment; as I’ve noted, the 4th Amendment is about searching and seizing physical evidence, not about asking people questions or compelling them to testify. But as I noted in an earlier post, asking the question could implicate the Miranda principles. It might, therefore, be useful to address the Miranda issue that might arise in connection with a border search of electronic devices, perhaps in another policy.



I’ll probably do another post on these policies, once I’ve had time to review them in more detail and think more about what they say, and don’t say.



My initial reaction to the policies is the same reaction I’ve always had to this scenario. On the one hand, DHS is absolutely right: Taken literally, the border exception clearly applies to any container someone is trying to bring across a U.S. border; the rationale I noted above, i.e., that a sovereign has the right to control what comes into and goes out of its territory, applies with equal force to data stored in a laptop or other device. Data can be contraband, which has always been a primary focus of the border exception; data can also be evidence of a completed crime (espionage) or a prospective crime (planning a terrorist event). The justification for border searches therefore applies with equal logic to electronic containers and their contents.



At the same time, I sympathize with those who’ve argued that laptops, especially, are “different” for the purposes of applying the border search exception. There’s absolutely no equivalence between the magnitude and complexity of the information I carry in my laptop and what I carry in my carry-on bag. It seems, on an emotional, intuitive level, that this should somehow matter; I suspect that if asked, most people would say that it should matter.



The problem, of course, is figuring out how to do exactly what the DHS said it’s trying to do in these policies, i.e., reconcile the needs to combat crime/terrorism and to protect privacy. I think the DHS made a conscientious effort to do that in these policies; I also think we may need to revise the border search exception so it can better accommodate the realities of what I call “portable privacy” – the fact that the “papers” which record the details of our personal and professional lives are no longer locked away in our homes or offices. We increasingly carry that information with us; the issue we therefore need to resolve is the extent to which this compromises the 4th Amendment’s protection of it.



Ultimately, though, this may to some extent be a transient problem. As we increasingly store data online, we may not need to carry laptops or other devices when we travel into or out of the United States. Once we get to our destination, we can use a local computer to retrieve, print or use data we’ve stored online. That scenario takes the data entirely outside the scope of the border exception . . . unless we decide to apply the exception to data crossing national borders, as well as to people crossing them. I may do a post on that possibility at some point.


Saturday, August 29, 2009

Earthquake

Last Wednesday a federal court of appeals issued an opinion that’s going to have an impact on how law enforcement searches for and seizes electronic evidence.


I called this post “earthquake” because the decision is definitely going to shake things up in this area; whether it’s a 2.0 or a 9.0 on the digital Richter scale depends on how well it’s received by other courts. I’ll return to that issue after I describe and analyze the opinion.


The opinion is U.S. v. Comprehensive Drug Testing, Inc., 2009 WL 2605378 (U.S. Court of Appeals for the Ninth Circuit 2009). It arose from the investigation into steroid use by professional baseball players; in 2002, the Major League Baseball Players Association and Major League Baseball entered into a collective bargaining agreement that provided for drug testing of all players. Comprehensive Drug Testing (CDT) collected the specimens to be tested; Quest Diagnostics, Inc., performed the actual tests.


In the course of the investigation, federal agents developed probable cause to believe 10 players had tested positive for steroids. They got a warrant authorizing them to search CDT’s facilities for records pertaining to these 10 players. But when they executed the warrant, the agents “seized and promptly reviewed the drug testing records for hundreds of players in Major League Baseball”. U.S. v. Comprehensive Drug Testing, Inc., supra. CDT moved for return of the seized property pursuant to Rule 41(g) of the Federal Rules of Criminal Procedure, and that began a course of litigation that’s lasted for years.


I’m not going to attempt to summarize the motions and rulings and appeals that have gone on in this litigation. If you check out this most recent opinion, you’ll get a good idea of what’s gone on. It looks to me like this opinion is something more than just another round in the battle between CDT and the federal government; it looks to me like the Ninth Circuit Court of Appeals decided to use this case as the occasion to address issues that had presumably been troubling the judges for some time.


I base that conclusion on language in the opinion and on the fact that this opinion was issued by an en banc Court of Appeals; as Wikipedia explains, federal appeals are usually heard by panels of 3 Court of Appeals judges. A federal Court of Appeals can, if a majority of the judges who compose that court so decide, have an appeal heard en banc, i.e., by a larger slice of the court; as Wikipedia explains, en banc appeals in the Ninth Circuit are heard by 11 of the court’s 28 appellate judges. As Wikipedia notes, under federal law en banc proceedings “are disfavored but may be ordered . . . to maintain uniformity of decisions within the circuit or if the issue is exceptionally important.” I’m guessing that both factors prompted the en banc hearing in this case.


What’s extraordinary about this opinion is that after going through all the discrete issues involved in the appeal, the en banc court outlines guidelines officers must follow when they conduct computer searches and seizures: “When the government wishes to obtain a warrant to examine a computer hard drive or electronic storage medium in searching for certain incriminating files, or when a search for evidence could result in the seizure of a computer, . . . magistrate judges must be vigilant in observing the guidance we have set out throughout our opinion”. U.S. v. Comprehensive Drug Testing, Inc., supra. The court then summarizes the five principles that constitute this “guidance”:


1. Magistrates should insist that the government waive reliance upon the plain view doctrine in digital evidence cases.


As I explained in an earlier post, the plain view doctrine lets officers seize evidence they observe that is not within the scope of their search warrant but that they observe while searching for evidence that is within the scope of the warrant. The en banc court found that applying the doctrine to digital searches and seizures creates potential for abuse: Officers could seize massive quantities of data on the premise that it includes at least some evidence that is within the scope of their warrant; then, as they go through the data, they can seize (and use) (i) evidence that is within the scope of the warrant and (ii) evidence that is not within the scope of the warrant but that is seizable under the plain view doctrine. The en banc court found that to prevent abuse magistrates who issue digital search warrants must require the government to “forswear reliance on the plain view doctrine or any similar doctrine that would allow it to retain data to which it has gained access only because it was required to segregate seizable from non-seizable data”. U.S. v. Comprehensive Drug Testing, Inc., supra. If the government refuses, the magistrate must “order that the seizable and non-seizable data be separated by an independent third party under the supervision of the court, or deny the warrant”. U.S. v. Comprehensive Drug Testing, Inc., supra.


2. Segregation and redaction must be done either by specialized personnel or by an independent third party. If the segregation is to be done by government computer personnel, the government must agree in the warrant application that [they] will not disclose to the investigators any information other than that which is the target of the warrant.


This reinforces the issue noted above, i.e., the concern that by seizing a mass of digital evidence investigators can leverage the forensics process to find evidence that is not within the scope of the warrant and as to which they did not have the probable cause needed to obtain another warrant.


3. Warrants . . . must disclose the actual risks of destruction of evidence. . . .


The en banc court found that this is necessary to prevent the government from using “theoretical risks” of data destruction to persuade magistrates to issue search warrants and/or expand the scope of digital search warrants.


4. The government’s search protocol must be designed to uncover only the information for which it has probable cause, and only that information may be examined by the case agents.


Again, the en banc court found that this is necessary to prevent investigators from unconstitutionally expanding the scope of the search that is authorized by a warrant.


5. The government must destroy or, if the recipient may lawfully possess it, return non-responsive data, keeping the issuing magistrate informed about when it has done so and what it has kept.


Here, too, the court is concerned about investigators’ manipulating the warrant process: “When . . . the government comes into possession of evidence by circumventing or willfully disregarding limitations in a search warrant, it must not be allowed to benefit from its own wrongdoing by retaining the wrongfully obtained evidence or any fruits thereof.” U.S. v. Comprehensive Drug Testing, Inc., supra. The court also noted the need to return non-responsive data under the basic rule I discussed in an earlier post.


That’s just an abbreviated summary of what the en banc court did in this case. What do I think of the opinion? Well, I’m amazed.


I’m not amazed by the concerns the court raises because I, too, share some of those concerns. I’m amazed that a federal Court of Appeals has essentially announced a rule book for digital searches and seizures.


And mostly, I’m wondering if the court has the power to do that. I just did some basic research to see if I could find any cases (or law review articles or treatises) that say a court can require law enforcement officers to waive a 4th Amendment exception in order to obtain a search warrant. I didn’t find anything, which doesn’t surprise me.


It looks to me like what the en banc court has done is analogous to what some federal district courts did a few years ago. Those courts required the government to submit, and to follow, a search protocol whenever it obtained a digital search warrant; the notion was to ensure that the analysis of the seized data (the “search”) didn’t exceed the scope of the warrant itself. We don’t hear much about search protocols any more because a number of federal Courts of Appeals (including the Ninth Circuit) have held that protocols aren’t necessary because the OBJECT of the search serves to narrow the search itself. That is, they said that if agents are looking for child pornography, the fact that they’re looking for child pornography is enough to keep the search within the scope of the warrant. (That’s oversimplifying a bit, but this post is already quite long.)


What the Ninth Circuit’s done has the potential to re-ignite a debate that was raging when a few lower federal courts were requiring protocols. The issue in the debate is the role of the magistrate who issues a search warrant: Is the magistrate’s role, as the government will argue, limited to the essentially clerical process of reviewing a search warrant to see that it’s based on probable cause and specifically describes the place to be searched and the item(s) to be searched for? Or can the magistrate who issues a search warrant use the warrant to impose restrictions on how the government (i) executes the warrant (seizes data) and (ii) analyzes the data once it’s been seized?


The magistrates who were requiring protocols argued that under the 4th Amendment they’re responsible for ensuring that the execution of a search warrant – as well as the issuance of a warrant – complies with the requirements of the 4th Amendment . . . which means that they have the constitutional authority to impose requirements on the government’s execution of a warrant. It looks to me like the en banc Ninth Circuit’s opinion is at least implicitly based on the latter theory.


One final note: The magnitude of the earthquake generated by this opinion will depend on how other courts treat it. In the federal system, district courts are trial courts and the Courts of Appeals are intermediate appellate courts, operating between the district courts and the U.S. Supreme Court. As Wikipedia explains, there are 12 regional federal Courts of Appeals, each of which covers a specific geographical area. The Ninth Circuit covers California and other Western states; the rulings of the Ninth Circuit Court of Appeals only bind federal district courts in those states. The other federal Courts of Appeals do not HAVE to follow this opinion, nor do federal district courts in states other than those that comprise the Ninth Circuit. Nor are state trial courts, state courts of appeals or state supreme courts bound by it.


So . . . if a lot of other federal and state courts buy into the Ninth Circuit’s opinion, then the decision is likely to have a major impact on digital search and seizure law. If only a few (or none) buy into it, then the impact will be limited to the courts in the states that comprise the Ninth Circuit.


And then there is that other possibility: The Department of Justice might try to take this issue to the U.S. Supreme Court, in the interests of resolving the issue I noted above once and for all.



Friday, August 28, 2009

Obscene Child Pornography: Two Cases

As I noted in an earlier post, in 2003 Congress created a new child pornography crime: producing, receiving, possessing or manufacturing obscene child pornography. PROTECT Act of 2003, Pub. L. No. 108-21 (2003). The new crime is codified as 18 U.S. Code § 1466A.



It defines obscene child pornography as “a visual depiction of any kind, including a drawing, cartoon, sculpture, or painting,” that depicts (i) a minor engaging in sexually explicit conduct and is obscene (§ 1466A(a)(1) and § 1466A(b)(1)); or (ii) “an image that is, or appears to be, of a minor engaging in graphic bestiality, sadistic or masochistic abuse, or sexual intercourse and lacks serious literary, artistic, political, or scientific value” (§ 1466A(a)(2) and § 1466A(b)(2)).


The latter part of the statute is intended to implement the U.S. Supreme Court’s standard for determining what is obscene: In Miller v. California, 413 U.S. 15 (1973), the Court held that to be constitutional under the First Amendment, obscenity statutes must “be limited to works which, taken as a whole, appeal to the prurient interest in sex, which portray sexual conduct in a patently offensive way, and which, taken as a whole, do not have serious literary, artistic, political, or scientific value.” Miller v. California, supra.


In this post, I’m going to review two federal court decisions, one of which held that § 1466A is constitutional and the other of which held that it is not. We’ll start with the case that upheld its constitutionality.


In 2004, Dwight Whorley was charged with 19 counts of violating § 1466A(a)(1) after employees of the Virginia Employment Commission discovered he’d been using a computer in their public resource room to download “Japanese anime-style cartoons of children engaged in explicit sexual conduct with adults.” U.S. v. Whorley, 550 F.3d 326 (U.S. Court of Appeals for the Fourth Circuit 2008). The indictment charged Whorley with “knowingly receiving” the child pornography.


Whorley went to trial and was convicted on all the § 1466A counts. He appealed, arguing, in part, that § 1466A(a)(1) was unconstitutional as applied to the

cartoon drawings that formed the basis for the charges . . . because cartoon figures are not depictions of actual people. He argues that § 1466A(a)(1) necessarily requires that the visual depictions be of actual minors and that if the depiction of an actual minor is not required, then [it] would be unconstitutional on its face under . . . Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002).

U.S. v. Whorley, supra. As I’ve noted before, in the Ashcroft case the U.S. Supreme Court held that a statute that criminalized the possession of virtual child pornography violated the First Amendment. The Ashcroft Court applied an earlier Supreme Court decision – New York v. Ferber, 458 U.S. 747 (1982) – which had held that real child pornography can be criminalized without violating the First Amendment because while it’s speech, it is speech the creation of which involves the victimization of real children. The Ashcroft Court held that since virtual child pornography does not involve the use of real children and therefore does not “harm” real children, it cannot be criminalized without violating the First Amendment.


Whorley’s first argument was based on the structure of the provisions of § 1466A(a):

Whorley points out that subsection (a)(1) (prohibiting depictions of `a minor engaging in sexually explicit conduct’) is mirrored in subsection (a)(2) (prohibiting `an image that is, or appears to be, of a minor’). . . . He argues that the `appears to be’ language in subsection (a)(2) indicates reference to a real minor in subsection (a)(1). In addition, he contends that subsection (a)(1) prohibits material depicting `sexually explicit conduct,’ which is defined in 18 U.S. Code § 2256 as referring to real people. Section 2256 defines `sexually explicit conduct’ . . . as actual or simulated sexual intercourse, “whether between persons of the same or opposite sex.” 18 U.S. Code § 2256(2)(A).

U.S. v. Whorley, supra. The Court of Appeals didn’t buy his argument:

While § 1466A(a)(1) would clearly prohibit an obscene photographic depiction of an actual minor engaging in sexually explicit conduct, it also criminalizes receipt of `a visual depiction of any kind, including a drawing, cartoon, sculpture, or painting,’ that `depicts a minor engaging in sexually explicit conduct’ and is obscene. . . . In addition, Whorley overlooks § 1466A(c), which unambiguously states that `[i]t is not a required element of any offense under this section that the minor depicted actually exist.’ . . . The . . . language is sufficiently broad to prohibit receipt of obscene cartoons. . . .

U.S. v. Whorley, supra. Whorley then tried his First Amendment argument, claiming that if § 1466A(a)(1) did not require that “an actual minor . . . be depicted”, it violated the Supreme Court’s ruling in Ashcroft. As the Court of Appeals noted, there was “no suggestion that the cartoons in this case depict actual children; they were cartoons.” U.S. v. Whorley, supra.



Whorley’s problem was that the Ashcroft Court noted that the First Amendment “does not embrace certain categories of speech, including defamation, incitement, obscenity, and pornography produced with real children.” Ashcroft v. Free Speech Coalition, supra. The Court of Appeals held that § 1466A can be applied to cartoons and other material that does not depict a real child because it is an obscenity statute, not a child pornography statute; to violate § 1466A, the material must be obscene, which means the statute is “a valid restriction on obscene speech” under the Supreme Court’s ruling in Miller v. California.


A few months earlier, a federal district court (a federal trial court, rather than a federal appellate court) reached a different conclusion with regard to one provision of § 1466A. Like Whorley, Christopher Handley was indicted for receiving (§ 1466A(a)) and possessing (§ 1466A(b)) obscene child pornography based on his having acquired Japanese anime cartoons that depicted minors engaging in sexually explicit conduct. U.S. v. Handley, 565 F.Supp.2d 996 (U.S. District Court for the Southern District of Iowa 2008). Handley moved to dismiss the charges, arguing that they violated the First Amendment.


One of Handley’s arguments was an Ashcroft argument that was essentially identical to the argument Whorley made. Like Whorley, Handley lost on the argument because the federal district court judge, like the Court of Appeals, held that the statute punishes the receipt and possession of obscene child pornography, not simply child pornography. Since the statute targets obscene material, the court held, it does not violate the First Amendment. U.S. v. Handley, supra.


The Iowa federal judge reached a different conclusion on Handley’s other argument, which was that the subsections of § 1466A differed in terms of the extent to which they required that the material be obscene. Handley had argued that

subsections 1466A(a)(2) and (b)(2) ban virtual child pornography that is not obscene, prohibiting sexually-oriented speech without considering whether it appeals to the prurient interest or is patently offensive. . . . [T]he only element from the three-prong Miller test incorporated into these subsections of § 1466A is that the depiction must lack serious literary, artistic, political, or scientific value.

U.S. v. Handley, supra. The federal judge agreed. He found that §§ 1466A(a)(1) and 1466A(b)(1) “clearly require the material be obscene and the three-prong Miller test would necessarily be incorporated into the essential elements” of the offenses defined by both provisions. U.S. v. Handley, supra.


The language of subsections 1466A(a)(2) and (b)(2) does not require the material be deemed obscene. Instead, those sections merely require that the jury find the material depicts a minor, or what appears to be a minor, engaging in at least one of the acts enunciated in the list of various sexually-explicit conduct contained in subsections 1466A(a)(2)(A) or (b)(2)(A), and that the visual depiction lacks serious literary, artistic, political, or scientific value.


U.S. v. Handley, supra. As I noted earlier, the (a)(2) and (b)(2) subsections only require that the material depict “an image that is, or appears to be, of a minor engaging in graphic bestiality, sadistic or masochistic abuse, or sexual intercourse and lacks serious literary, artistic, political, or scientific value.” The judge in the Handley case held that this did not implement the Miller requirement that the determination of obscenity must focus on three issues:

(a) whether 'the average person, applying contemporary community standards' would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value. . . .

Miller v. California, supra. The judge therefore held that these subsections of § 1466A violate the First Amendment. U.S. v. Handley, supra.


As far as I can tell, the obscene child pornography offense was added to the federal criminal code to give prosecutors an additional option: If they can’t prove that images depict real children, they can still prosecute the person who received, possessed, produced or manufactured them if the images are obscene under Miller. I don’t know why Congress didn’t incorporate the three-pronged Miller test into the statute.


Since Whorley was only charged with receiving obscene child pornography in violation of § 1466A(a)(1), the Miller issue the Handley court addressed did not come up in his case.



Wednesday, August 26, 2009

"Friends" - Part II


This is a follow-up to my last post (“Friends” – Part I).


In that post, I reviewed a bar ethics committee’s ruling on the questions submitted by an unidentified attorney.


As I explained, the attorney asked about the ethical permissibility of having a “third person” approach a woman who was going to testify in an upcoming case involving his client. She wasn’t a party to the litigation, but her testimony would help the party on the other side of the litigation; in other words, it would not help the attorney’s client, and would probably damage his client’s position in the litigation.

Here, to refresh your recollection or bring you up to speed if you haven’t read the earlier post, is the scenario this attorney ran by the Pennsylvania Bar ethics committee:

[T]he witness . . . has `Face Book’ and `My Space’ accounts. . . . The inquirer believes the pages maintained by the witness may contain information . . . [that] could be used to impeach her testimony . . . at trial. The inquirer . . . has, . . . himself or through agents, attempted to access both accounts. . . . [I]t was found that access to the pages can be obtained only by the witness's permission. . . .


The inquirer proposes to ask a third person, someone whose name the witness will not recognize, to go to the Face Book and My Space websites, contact the witness and seek to `friend’ her, to obtain access to the information on the pages. The third person would state only truthful information, for example, his or her true name, but would not reveal that he or she is affiliated with the lawyer or the true purpose for which he or she is seeking access, namely, to provide the information posted on the pages to a lawyer for possible use antagonistic to the witness. If the witness allows access, the third person would then provide the information posted on the pages to the inquirer who would evaluate it for possible use in the litigation.

Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility: Pa. Ethics Opinion Number 2009-02 (March 2009).


In my last post I reviewed the specific ethical issues the attorney’s question raised and summarized how the Pennsylvania Bar ethics committee dealt with most, but not all, of them. I said I’d do another post to analyze whether the scenario the attorney outlines would require the suppression of evidence if it were implemented as part of a criminal investigation.


Let’s assume, therefore, that a prosecutor proposes to use the identical tactic to gain access to information on the MySpace and Facebook pages maintained by a woman who is peripherally involved in an ongoing drug distribution operation. Let’s make the analysis more interesting by assuming that the woman – Jane Doe – is married to a man who plays a major role in the operation; Jane knows he’s involved in the drug trade but she, herself, has nothing to do with the operation’s criminal activities. But her husband, John, occasionally tells her things about others involved in the operation.


The prosecutor – Mary Roe – is working with local police on an investigation of the drug ring. They’ve been quite successful in building a case against John Doe and the others who are involved in the drug operation, successful enough that they will be able to bring charges against them based on the evidence they’ve already collected. The prosecutor, though, would like to gain access to Jane’s MySpace and Facebook pages because she suspects Jane has posted comments, photos or other information that could be useful in Roe’s case in chief (the evidence she presents at trial) and/or to impeach (discredit) those who testify for the defense.


Roe therefore has two of the officers involved in the investigation (Smith and Jones) create MySpace and Facebook accounts, respectively, and instructs them to contact Jane Doe and ask to become her friend on those sites. Let’s assume they do so and Jane Doe befriends both of them. Since I’m trying to keep this scenario as similar as possible to the one I addressed in my earlier post, we’ll assume that all Roe wants the officers to find is information Jane Doe has already posted on her MySpace and Facebook pages. We are not, in other words, assuming they’re going to try to get her to give them new information about her husband’s or others’ involvement in the drug operation.


As I noted in my last post, the Pennsylvania Bar ethics committee found that if the attorney implemented the scenario he inquired about he would be violating certain of the ethical rules that govern the conduct of attorneys. We’ll assume the same holds here, i.e., we’ll assume that Mary Roe would violate the same ethical rules if she had Smith and Jones use MySpace and Facebook, respectively, to befriend Jane Doe and thereby gain access to information that could be used in the investigation and/or at trial. As I noted in my last post, the Pennsylvania Bar ethics committee did not address the final question the attorney posed: whether he could use evidence derived from his implementing the scenario even though it meant he committed ethical violations.


That’s the issue I want to take up here. In declining to decide if the attorney could use the evidence, the Pennsylvania ethics committee said the issue was “beyond the scope” of its responsibility and was “a matter of substantive and evidentiary law to be addressed by the court” if his case went to trial. Pa. Ethics Opinion Number 2009-02 (March 2009). In so doing, the Pennsylvania ethics committee did something similar to how U.S. law once dealt with violations of constitutional rights.


As Wikipedia explains, it wasn’t until 1961 that U.S. law employed the exclusionary rule – the principle that illegally obtained evidence can’t be used at trial – to enforce the 4th Amendment and other constitutional provisions that limit what officers can do in the course of investigating crimes. Until 1961, the default rule in the U.S. was that illegally obtained evidence was admissible as long as it met the requirements imposed by the applicable rules of evidence. The remedy for the officers’ violating the Constitution was to sue the officers, try to have them prosecuted or try to have them disciplined. It looks to me like the Pennsylvania ethics committee did something similar, i.e., it separated the ethics issues from the consequences of violating ethical rules.


That, though, is not relevant to this analysis. We’re assuming Mary Roe operates in a jurisdiction the ethics committee of which takes the same position on the admissibility of evidence obtained from the scenario we’re positing as the Pennsylvania committee. But we’re no longer focusing on the ethical rules; the issue we’re going to analyze is whether what Roe had the officers do violated any constitutional rules and, in so doing, triggered the application of the exclusionary rule.


I think the answer is clearly “no.” The only two rules I can see that could apply here are the 4th and 5th Amendments.


There are several problems with trying to apply the 4th Amendment to this scenario. One is that since Jane Doe posted the information the officers obtained on her MySpace and Facebook pages, and since others had access to that information, I don’t see how she can claim it was “private” under the 4th Amendment. As the Supreme Court said in Katz v. U.S., 389 U.S. 347 (1967), “[w]hat a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection.” By putting the information online and letting others access her pages, Jane Doe knowingly exposed the information to public view and, in so doing, surrendered 4th Amendment protection of it.


There’s another reason why the 4th Amendment wouldn’t apply, and it goes to the use of what the law calls the “false friend” tactic. As the Supreme Court said in Hoffa v. U.S., 385 U.S. 293 (1966), it has never held that the 4th Amendment “protects a wrongdoer's misplaced belief that a person to whom he voluntarily confides his wrongdoing will not reveal it.” And in U.S. v. Connors, 441 F.3d 527 (U.S. Court of Appeals for the 7th Circuit 2006), a federal appellate court observed, “[w]hen a friend is false, blame the friend, not the government.”


In other words, if Jane Doe argues (i) that she somehow had a cognizable 4th Amendment expectation of privacy in the information she posted on her MySpace and Facebook pages and (ii) that she would not have shared this private information with Smith and Jones if she’d known they were really working for the police and against her husband, she still loses. For the purposes of the 4th Amendment, you trust people at your peril; if Jane Doe was foolish enough to share information with others, she assumed the risk that information would be used against her and/or against her husband.


The Does’ only other option is the 5th Amendment, which clearly doesn’t apply. As I noted in an earlier post, the 5th Amendment privilege against self-incrimination only comes into play when (i) you are compelled (ii) to give testimony (iii) that incriminates you. For the purposes of this analysis, we’ll assume that what was posted on Jane’s MySpace and Facebook pages incriminated her. We’ll also assume that what was posted was “testimony” because some of it, anyway, consisted of messages and other communications.


Jane’s problem, as far as taking the 5th is concerned, is that she wasn’t “compelled” to write and post the information online. The Supreme Court has held that if you voluntarily create documents (and whatever content is posted online is analogous to hardcopy documents), you cannot take the 5th Amendment as to the contents of those documents because you weren't compelled to create them. For the purposes of applying the 5th Amendment, if you voluntarily generate content (oral testimony or writings like letters or emails), you've waived the 5th Amendment privilege as to that content.


Under the 5th Amendment, “compulsion” requires that a court have ordered you to give testimony, which means you have to testify or be held in contempt and be locked up until you do. No one made Jane Doe post this information online and then share it with Smith and Jones, so the 5th Amendment doesn’t work, either.


Finally, you might wonder about the applicability of the Miranda rules. As I noted in a recent post, Miranda is a construct – a set of prophylactic rules the U.S. Supreme Court made up in order to restrict police officers’ ability to use psychological coercion to get people to confess in police interrogation rooms. For Miranda to apply, you have to be in police “custody” which, as I noted in that post, means you have to either (i) be under arrest or (ii) have been restrained by the police in a manner similar to a formal arrest. Since Jane Doe wasn’t in custody when she created the MySpace and Facebook pages or when she gave Smith and Jones access to them, Miranda doesn’t apply, either.


It looks like the evidence would be admissible, as long as it meets the requirements of the applicable rules of evidence.


Monday, August 24, 2009

"Friends" - Part I

This post is arguably a little off topic. It does deal with conduct online.


But it isn’t about cybercrime, as such. It’s about what I think is a related issue: How much can you trust people you befriend on sites like MySpace and Facebook?


Unlike most of my posts, this one isn’t based on a criminal case. It’s based on an opinion issued by the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility: Pa. Ethics Opinion Number 2009-02 (March 2009).


As Wikipedia explains, legal ethics is based on a set of ethical principles that govern


the conduct of people engaged in the practice of law. In the United States, the American Bar Association has promulgated model rules . . . [that] address the client-lawyer relationship, duties of a lawyer as advocate in adversary proceedings, dealings with persons other than clients . . . and maintaining the integrity of the profession. Respect of client confidences, candor toward the tribunal, truthfulness in statements to others, and professional independence are some of the defining features of legal ethics.

You can find the ABA’s Model Rules of Professional Conduct here, if you’re interested. As an ABA site notes, California is the only state that does not “have professional conduct rules that follow the format of the ABA Model Rules.” California has its own Rules of Professional Conduct, which you can find here, if you’re interested.


The ethics opinion we’re going to focus on was issued in response to an inquiry a lawyer sent to the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility. The opinion explains what the inquiry was and how it came about:

The inquirer deposed an 18 year old woman (the `witness’). The witness is not a party to the litigation, nor is she represented. Her testimony is helpful to the party adverse to the inquirer's client.

During . . . the deposition, the witness revealed that she has `Face Book’ and `My Space’ accounts. Having such accounts permits a user . . . to create personal `pages’ on which he or she posts information on any topic, sometimes including highly personal information. Access to the pages . . . is limited to persons who obtain the user's permission, which . . . is obtained after the user is approached on line by the person seeking access. The user can grant access to his or her page with almost no information about the person seeking access, or can ask for detailed information about the person seeking access before deciding whether to allow access.

The inquirer believes the pages maintained by the witness may contain information relevant to the matter in which the witness was deposed, and could be used to impeach the witness's testimony should she testify at trial. The inquirer did not ask the witness to reveal the contents of her pages, either by permitting access to them on line or otherwise. He has, however, either himself or through agents, visited Face Book and My Space and attempted to access both accounts. When that was done, it was found that access to the pages can be obtained only by the witness's permission. . . .


The inquirer states that based on what he saw in trying to access the pages, he has determined that the witness tends to allow access to anyone who asks (although it is not clear how he could know that), and states that he does not know if the witness would allow access to him if he asked her directly to do so.

The inquirer proposes to ask a third person, someone whose name the witness will not recognize, to go to the Face Book and My Space websites, contact the witness and seek to `friend’ her, to obtain access to the information on the pages. The third person would state only truthful information, for example, his or her true name, but would not reveal that he or she is affiliated with the lawyer or the true purpose for which he or she is seeking access, namely, to provide the information posted on the pages to a lawyer for possible use antagonistic to the witness. If the witness allows access, the third person would then provide the information posted on the pages to the inquirer who would evaluate it for possible use in the litigation.

The inquirer asks the Committee's view as to whether the proposed course of conduct is permissible under the Rules of Professional Conduct, and whether he may use the information obtained from the pages if access is allowed.

Pa. Ethics Opinion Number 2009-02, supra.


Interesting question, isn’t it? It’s coming up in the context of evidence-gathering in civil litigation but as I’ll note later, I can see it also arising in criminal investigations.


The Bar Committee began by noting that the inquiry implicated several of the state’s Rules of Professional Conduct. The first one it considered is Rule 5.3, which says a lawyer is responsible for the conduct of a non-lawyer who is “employed or retained by or associated with” the lawyer if two circumstances exist: The first circumstance is that the conduct of the non-lawyer would constitute a violation of the Rules of Professional Conduct if the lawyer engaged in it; the other is that the lawyer “order[ed] or, with the knowledge of the specific conduct, ratifie[d] the conduct”. Pa. Ethics Opinion Number 2009-02, supra.


In deciding whether Rule 5.3 applied to the scenario at issue in the lawyer’s inquiry, the Bar Committee said the fact “that the actual interaction with the witness would be undertaken by a third party . . . does not insulate the inquirer from ethical responsibility for the conduct.” Pa. Ethics Opinion Number 2009-02, supra. While the Bar Committee found it could not say the lawyer would be “literally `ordering’” the conduct by the third person the lawyer “plainly” would be “procuring the conduct and, if it were undertaken, would be ratifying it with full knowledge of its propriety or lack thereof”. The Committee therefore found that the inquiring lawyer would be responsible for any ethical violations resulting from the third party’s contact with the witness.


The Bar Committee then considered whether the third party’s contact would violate the state Rules of Professional Conduct. It found, first, that the scenario the lawyer outlined would violate Rule 8.4, which makes it professional misconduct for a lawyer to “engage in conduct involving dishonesty, fraud, deceit or misrepresentation”. Pa. Ethics Opinion Number 2009-02, supra. The Committee found that the scheme the lawyer outlined violated the rule because the third party’s communication with the witness would be

deceptive. It omits a highly material fact, namely, that the third party who asks to be allowed access to the witness's pages is doing so only because he or she is intent on obtaining information and sharing it with a lawyer for use in a lawsuit to impeach the testimony of the witness. The omission would . . . conceal that fact . . . for the purpose of inducing the witness to allow access, when she may not do so if she knew . . . the true purpose . . . was to obtain information for the purpose of impeaching her testimony.

Pa. Ethics Opinion Number 2009-02, supra. The Bar Committee also found that even if

by allowing virtually all would-be `friends’ onto her FaceBook and MySpace pages, the witness is exposing herself to risks like that in this case, excusing the deceit on that basis would be improper. Deception is deception, regardless of the victim's wariness in her interactions on the internet and susceptibility to being deceived.

Pa. Ethics Opinion Number 2009-02, supra.


Finally, the Bar Committee found that the inquiring lawyer’s proposed tactic would also violate Rule 4.1(a). Rule 4.1(a) makes it professional misconduct for a lawyer knowingly to “make a false statement of material fact to a third person”.


Perhaps because he was expecting bad news from the Committee, the lawyer also asked whether, “if he obtained the information in the manner described, he could use it in the litigation.” Pa. Ethics Opinion Number 2009-02, supra. The Bar Committee found this issue was “beyond the scope of its charge” and so did not address it. The Committee noted that if the lawyer disregarded its opinion and used the tactic he inquired about to obtain the information, “whether or not the evidence would be usable either by him or by subsequent counsel in the case is a matter of substantive and evidentiary law to be addressed by the court” in the civil case. Pa. Ethics Opinion Number 2009-02, supra.


I find the lawyer’s last question a little depressing. It implies that he’s willing to risk committing professional misconduct to get evidence if he thinks he can use the evidence to discredit the witness who’ll testify against his client. If that inference is correct, his conduct also implies that he’s not as concerned about avoiding professional misconduct as he is about preserving the fruits of the misconduct (which kind of misses the point).


As I said earlier, I can see the tactic this lawyer inquired about being used in criminal investigations. I’ll address that issue in my next post.