This is a follow-up to my last post, about security.
As I have written elsewhere (I know I keep saying that, but it’s true), our goal is to keep online crime to manageable proportions, to maintain the baseline of order cyberspace needs to function as an analogue of the real world. In the real world, we maintain a baseline of order that allows societies to carry out the functions they must if they and their constituents are to survive and prosper. We cannot eliminate real-world crime, but we control it, using the law enforcement strategy I talked about yesterday and have talked about here and elsewhere (yes, again).
We cannot, as I have explained before, use the reactive law enforcement strategy we rely on for real-world crime to deal with cybercrime, because cybercrime is different. We need a new strategy, one that involves citizens as well as law enforcement. We still retain the traditional, reactive law enforcement strategy, but we supplement it with preventative efforts implemented by individuals and entities.
I see cyberspace as analogous to Europe after the fall of Rome. The mechanisms that had maintained the necessary modicum of order in society disappeared, leaving a state of disorder, anarchy. There were no nation-states to maintain order within a demarcated territory; indeed, there were no functional territorial boundaries. Crime control was purely a civilian function; members of communities shared responsibility for apprehending criminals. In medieval England, male adults were required to possess weapons they could use in apprehending and subduing a criminal; the practice was for someone to raise the “hue and cry” when a crime had been committed, after which men in the local community attempted to catch the perpetrator, who would then face certain, rough justice. This model prevailed until the 19th century, when Sir Robert Peel invented the modern police force and eliminated civilian involvement in security.
We need to restore civilian involvement, at least in securing cyberspace. We need a culture change; we need people to understand that cyberspace is not like the safe, predictable environment many of us inhabit; it is, instead, analogous to the out-of-control world Europeans confronted after the fall of Rome. It was up to them to take care of themselves, and it is up to those of us who inhabit cyberspace to do the same thing.
I have written extensively about this, but I have not seen it mentioned in the popular press or anywhere else . . . except for the National Strategy to Secure Cyberspace, which the White House released in 2003. It calls for civilians – individuals and entities – to assume responsibility for protecting themselves online and thereby helping to prevent cybercrime. It makes this assumption of responsibility a purely voluntary act; there are no consequences if one does not assume responsibility and does not make an effort to prevent cybercrime. Perhaps for that reason, the National Strategy rather quickly disappeared from public view and public discourse.
As I have argued elsewhere, we cannot rely on a voluntary approach to achieve civilian involvement in controlling crime in cyberspace. We need a culture change, and while that might occur on its own if we pursue a voluntary approach, it would take a very long time for the process to be complete. I do not think we have a very long time; I think cybercrime (and cyberterrorism) will only become more pervasive and more destructive, since there is little chance a clever cybercriminal will be apprehended and sanctioned.
I have written extensively about how we can use law, notably criminal law, to jump-start this culture shift. I do not claim to have devised the perfect solution for this problem; all I really want is to bring it into public consciousness and see us making some serious effort to address it.
Saturday, April 29, 2006
Friday, April 28, 2006
Security
“. . . all I could see in London's packed Olympia conference centre was an industry united in a profitable celebration of the failure of our society to properly protect itself from the dangers of living an increasingly online existence.”
Simon Moores, What’s the Point of Security?, Silicon.com (April 26, 2006).
Moores is describing his reaction to the speakers and displays at a recent British computer security conference. As I have noted in earlier posts (e.g., "Treaty," April 16, 2006), I agree with him that our society is notably (some might say "criminally") unsuccessful in protecting itself from online dangers. As I have explained elsewhere, our failure is due to our continuing reliance on an outdated model . . . the reactive model of law enforcement we use to control real-world crime. As I have also explained elsewhere, that model is ineffective, at least as our sole crime-control methodology, for cybercrime, because cybercrime differs in several critical respects from real-world crime, the type of crime the model evolved to control.
I agree with Moores that we are doing a miserable job of protecting ourselves online. And I can understand his reaction to the conference that prompted his article -- while I tend to avoid commercial cybersecurity conferences, I, too, have on occasion found myself discouraged by the overt commercialization of efforts to secure our activities online . . . efforts, I might note, that are not proving particularly successful. I had the same reaction several years ago, when I went to a Homeland Security Conference in the US . . . and visited the Exhibition Hall where commercial vendors were displaying what I regarded as a parade of horribles: huge supplies of body bags, portable radiation detectors and protective gear, devices for dealing with the outbreak of hideous, exotic diseases, etc. It was horrible because of the spectres it raised, and it was horrible because people were dedicated to profiting from the anticipation (if not the realization) of these spectres.
I differ slightly from Moores in that I believe, as I have explained elsewhere, that a critical first step in changing the current status quo, in improving our ability to protect ourselves online, is effecting a sea change in our culture: We must inculcate the realization that we all -- schools, businesses, religious organizations, individuals, charities, government agencies, etc., etc. -- now bear a significant portion of the responsibility to control online crime. If these commercial events help inculcate that realization, then I think they are accomplishing something . . . aside from enriching the companies that participate.
The problem I see with these events (and analogous events that target only government officials and agencies) is that they do nothing to help the general public realize that they are, in effect, our front line in controlling cybercrime. One of the more heavily exploited tools of cybercrime today is the botnet . . . an assemblage of "civilian" computers that have been taken over by cybercriminals and turned into zombies that do the cybercriminals' bidding. Botnets are used for various activities; they are advantageous because of the expanded power they give cybercriminals, and because they serve as an effective buffer between cybercriminal and police. If police track down the source of an attack, they will find the "civilian" computers that constituted the botnet, not the actual perpetrators of the attack.
We desperately need to make the civilians who participate in cyberspace aware of the dangers that lurk there, including the danger (and consequences) of having their computers conscripted into a botnet. The conferences Moores writes about do nothing to accomplish that, which I see as the real tragedy. I agree with him that commercial motives are so far driving the efforts to develop "civilian" cybersecurity, efforts that are notably unsuccessful. My primary concern, however, is that because these commercial motives focus only on large organizations, the general populace, which is the true Achilles heel of any modern, online society, is being ignored.
Sunday, April 23, 2006
Doing what we couldn't do . . .
In my last post and in many earlier posts, I address various specific issues but in all of them I am really talking about a single theme: Computer technology lets us do things we could never do before: defraud someone on the other side of the world without leaving our armchair; feature our neighbor in violent fantasies we publish online; track someone's movements without having anyone actually follow them, and so on.
Technology lets us do things we could never do before, but law is still focusing on the old ways, on the things we have always been able to do. That is the nature of law -- it tends to be conservative, which is probably a good thing. We do not, after all, want to find ourselves dealing with the "law of the day" -- a statute the legislature threw together in haste to address what seemed a critical, and immediate, new problem.
As I noted in my last post ("Tracking Devices"), our judicial and legislative processes move very slowly, which becomes problematic when technology -- all kinds of technology -- evolves very rapidly. We need to figure out how we can reconcile law-making as a conservative, deliberative process with technological advancements that change the very fabric of society by letting us do things we could never have done fifty or even ten years ago.
How can we do this? Should we revise our law-making processes to, say, implement a "rocket docket" in our judicial systems that speeds cases through the levels of the system more swiftly, the result being that we generate more opinions dealing with the consequences of emerging technology? We could, I am sure, do something similar with our legislative processes, as well.
The problem is that simply speeding up the system would no doubt give us more law, but there is no reason to believe it would give us better law. Emphasizing accelerated law-making would probably give us "laws of the day" (or "laws of the week") . . . hastily assembled legislation or judicial opinions that react to specific issues, instead of articulating broad, flexible standards that have a broader application and therefore a much longer half-life.
IMHO, instead of trying to speed up the law-making process, we need to focus on what laws -- at least criminal laws -- really need to be concerned with. As I have explained elsewhere, laws are devices societies use to maintain order; laws tell us which behaviors are acceptable and which are not. "Civil" laws ensure that various processes -- e.g., traffic flow and the transfer of title to property -- proceed in an organized, predictable manner. "Criminal" rules prevent members of the society from preying on each other, fiscally, physically and emotionally.
Laws are therefore directed at human behavior. Although technology vastly expands the ways in which we can manifest human behavior, I do not think it fundamentally alters the nature of human behavior. If that is true, then it seems to me we can adapt law to changing technologies by focusing on the behaviors we want to encourage or discourage, instead of on the technology. The technology only serves as a vector for a particular behavior; our concern, therefore, is not with outlawing the technology, but with outlawing unacceptable uses of that technology.
How do we decide what is, and is not, an "unacceptable" use of a technology? My field is criminal law, so I shall focus on how this decision should be made with regard to criminalizing certain uses of technology.
Substantive criminal law -- the law that defines offenses -- focuses on a particular "harm." So, rape inflicts the "harm" of forced sexual intercourse, murder inflicts the "harm" of taking one's life, theft inflicts the "harm" of taking someone's property, and so on. If we focus on the "harm," and not on the technology, we stand a better chance of adopting laws that will have a more general applicability. This, after all, is what we have done for millennia; our criminal laws have always been behavior-based, not implement-based.
This all depends, of course, upon whether the range of human behaviors is stable enough that the emergence of new technologies will not significantly expand it. I think it is, and I think there is a correlation between behaviors and "harms."
To understand why I say that, we need to consider why people commit crimes. Basically, I think people commit crimes for two reasons: (i) rational goals; and (ii) passion.
Robbery is a classic example of a rational-goal crime; the goal is to enrich oneself by taking money or other property from someone else. The same is true of most property and white-collar crimes, such as fraud, forgery, blackmail, extortion, embezzlement, bribe-giving and -receiving, etc. It is also true of crimes like drug-dealing, which are not really property crimes but which share the same premise. In all these crimes, the infliction of "harm" on another is the product of a simple rational calculation: the (illicit) transfer of money or property from the victim to me enriches me, which I regard as a desirable outcome. Not surprisingly, most criminal activity in a society consists, and has always consisted, of rational-goal crimes; and that will continue to be true as long as the enhanced possession of wealth is seen as desirable because it gives one access to increased opportunities for pleasure, for status, for travel, for whatever one desires.
I define "passion" crimes more broadly than some. The press tends to use the term "crime of passion" to refer to a crime in which one person killed another in a highly emotional state; a good example of this is the case in Houston several years ago, when a wife ran her husband down after she realized he was still seeing his mistress. I would certainly include that crime, and comparable crimes, in my "passion crime" category. But I would also include the activities of pedophiles, necrophiles, cannibals and others with, shall we say, unconventional sexual drives in that category. I define "passion crimes" as the antonym of rational-goal crimes; I see them as crimes the commission of which results from an emotional calculus, not a rational calculus.
If all of this is true, and crime is the product of a limited range of human motivations, then I think we will tend to see technology used to commit crimes that, ultimately, are very similar to what we have seen historically. Some of this is already evident: I occasionally see a press story about an "Internet murder," which always refers to an instance in which someone used cyberspace to set up a meeting with a potential victim whom the perpetrator then killed. I don't see this as a cybercrime; I see it as murder, nothing more. The same is true of cyberfraud, cyberextortion, cyberblackmail, etc.
Not all undesirable online activity falls within traditional crime categories, of course. In my posting on "Fantasy" a few weeks ago, I explained how cyberspace lets someone publish fantasies -- explicit sexual or violent fantasies -- online in which they feature, say, a friend, a neighbor or an ex-lover as the victim of the fantasized activity. Imagine this happened to you: Imagine someone was publishing an ongoing series about raping, torturing and/or murdering you, and someone brought the series to your attention. It disturbs you, of course. But what is your recourse? You can try to sue the person responsible for . . . I'm not sure what. It's not really defamation or libel (it's "art"), nor is it invasion of privacy. It might constitute infliction of emotional distress, if you are in a jurisdiction that recognizes that cause of action . . . but even if you can sue, do sue and win, it's probably a Pyrrhic victory. The perpetrator probably has no money, so you will be stuck with your legal fees. And the perpetrator may simply transfer the fantasies (and perhaps himself) to another jurisdiction, one in which your civil judgment is irrelevant.
So maybe this is an area in which we need new law. I suspect it will be. If we decide to develop law in this area, we need to focus not on the use of a particular technology but on the infliction of a particular "harm." This, as I noted earlier, is a passion crime. The passion may be to torment the victim, to "control" the victim in a sense or some other emotional calculus that eludes me but that is, in the end, irrelevant. We need to remember our goal: To maintain order in our society by preventing people from inflicting "harm" on others. To do that, we need to craft a rule, a good, general rule, that criminalizes behavior that inflicts this type of non-physical "harm" on someone.
I hope this has made some sense. It's part of something I have actually been thinking a lot about and have written some about. It is, as you can probably tell, still very much a work in progress.
Tracking devices
The Fourth Amendment is the constitutional provision that protects citizens from having their privacy arbitrarily invaded by the government. The Fourth Amendment requires the government to get a warrant or invoke an exception to the warrant requirement before it can invade your privacy by, say, searching your home or office.
In my posts "Cartapping" (February 12, 2006) and "Can You Trust Your Car?" (April 19, 2006), I talked about the extent to which the Fourth Amendment applies to the government's using technology installed in your vehicle to eavesdrop on what you say while in the vehicle.
In this post I want to talk about something different: whether the Fourth Amendment applies to the government's using computer technology to track your movements in public areas. Until relatively recently, the only way the government could do this was to have police officers follow someone, and the Supreme Court has held that following someone is not a "search" under the Fourth Amendment. Searches invade a reasonable expectation of privacy, and it is simply not "reasonable" to say that my driving down city streets or on a highway is "private," since anyone who happens to be in the area, or who is inclined to follow me, can see where I am and infer where I am going. And in United States v. Knotts, 460 U.S. 276 (1983), the Supreme Court held that it was not a "search" for law enforcement officers to use a beeper installed in a vat of chemicals to follow a car; the vat was in the car, and the signal it transmitted helped the officers follow the car to its final destination. All the beeper did was send out an audible signal that became stronger as the officers drew closer to the car and weaker as they fell behind.
Beepers have become antiques. Today, police use one of two techniques to track someone's movements:
- Use an individual's cell phone to track her movements: If the cell phone is on (and maybe even if it is not), the cellular service provider can tell where the person carrying the phone is. This can be done in two ways. The older method is to use signals from cell phone towers to identify where a particular phone is located; cell phones continually send registration messages to towers in the area, and it is possible, using a technique called triangulation, to use these messages to pinpoint the location of a specific phone and track its movements. The newer method is to use GPS receivers installed in the phone; several years ago, the Federal Communications Commission mandated that, by the end of 2005, new cell phones have GPS technology installed. The purpose was to make it easier to find someone who had been injured in, say, a car accident, and could call for help but could not explain where he was.
- Install a GPS tracking device on someone's vehicle and use it to track her movements: The tracking devices are small and can easily be installed on a vehicle without the owner's knowing it. Unlike the beeper at issue in Knotts, they do more than simply send out a signal that helps humans follow a vehicle. GPS devices track a vehicle's movements automatically, sending the information to a receiving unit in a police station or other central facility. This means, of course, that no officer actually has to follow the vehicle; the GPS device automates the process. It also means, as some courts have noted, that the process of tracking the vehicle is vastly improved; the GPS device tracks the vehicle's movements on an uninterrupted 24/7 basis for as long as it is installed . . . for weeks, say. As some judges have noted, sustaining that kind of surveillance with human officers alone would be realistically impossible for law enforcement agencies with limited resources.
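For readers curious about the triangulation technique mentioned above, the underlying geometry can be sketched in a few lines of Python. This is an idealized, hypothetical illustration -- exact range estimates, a flat two-dimensional plane, exactly three towers -- not how carriers actually compute location: each range estimate places the phone on a circle around a tower, and intersecting the circles yields a position.

```python
import math

def trilaterate(towers, distances):
    """Estimate a 2-D position from three tower locations and three
    range estimates (in practice, ranges come from signal timing)."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise eliminates the squared
    # unknowns, leaving a linear system in (x, y):
    #   2(x2-x1)x + 2(y2-y1)y = d1^2 - d2^2 + x2^2 - x1^2 + y2^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three towers are collinear
    if abs(det) < 1e-12:
        raise ValueError("towers must not be collinear")
    # Cramer's rule for the 2x2 system
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# A hypothetical phone at (3, 4), ranged from three towers:
towers = [(0, 0), (10, 0), (0, 10)]
dists = [5.0, math.sqrt(65), math.sqrt(45)]
print(trilaterate(towers, dists))  # → approximately (3.0, 4.0)
```

Real systems must cope with noisy range estimates, so they typically use more than three towers and a least-squares fit; the point here is only that a handful of routine registration messages suffices to locate a phone.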
These tracking techniques illustrate a major problem we are facing with regard to privacy: How do we maintain the balance between privacy and legitimate law enforcement activity in the face of rapidly-evolving technology?
As I noted above, the only Supreme Court case on point for the use of these tracking techniques is Knotts . . . a 23-year-old decision that dealt with comparatively primitive technology. We do have, as I also noted, a number of decades-old federal statutes that establish processes agents must use to, for example, have a telephone company install a device that captures the numbers dialed from a phone, but they really do not apply to the use of cell phone GPS technology.
Nor is it clear whether the installation of a GPS tracking device on a vehicle is constitutional under Knotts. As I said, the use of such a device clearly results in the collection of information that far exceeds what a typical police department could accomplish by using human resources. Courts are struggling with whether that takes the use of a GPS tracking device out of the holding in Knotts and transforms it into a Fourth Amendment "search" that can only be conducted with a warrant.
So, what should we do? How should we resolve these issues?
Traditionally, we would (a) wait until the issue had made its way through the lower courts to the Supreme Court, which would issue a definitive opinion; and/or (b) adopt legislation that dealt with the problem. (Congress has, in this general area, tended to adopt statutes that implement and sometimes exceed the requirements of the Fourth Amendment.)
There are two problems with following this traditional approach in an era of rapidly-evolving technology:
- It can take forever for a case to make its way to the Supreme Court, be argued, and then decided. (And this Supreme Court takes very few cases -- roughly 75 a term, I believe.) If that decision enunciates a broad standard, then that standard can be extrapolated to help us deal with issues other than the specific issue (and technology) that went to the Court. But if the Court issues a very limited decision, that decision, and this whole process, will be of little help as we attempt to sort out the rapidly emerging legal issues generated by new technologies. The Court did precisely this, i.e., issued a very limited decision, in Kyllo v. United States, its 2001 pronouncement on the Fourth Amendment's applicability to law enforcement use of technology. In Kyllo, the Court was asked to decide if the use of a thermal imager to detect heat emanating from a structure is a Fourth Amendment "search." In a majority opinion written by Justice Scalia, 5 Justices said it was. More precisely, they said it is a "search" (i) to use technology that is not in general public use to (ii) detect information from inside a home, information an officer could not get otherwise except by going into the home. This holding is limited and inherently ambiguous (what happens when technology is in general public use? what happens if it's not a home?) . . . which means it is of little assistance in sorting out issues generated by law enforcement's use of evolving technologies. Unless the Supreme Court changes its approach to deciding cases like Kyllo, this alternative is not likely to be particularly helpful in resolving the dilemma I am writing about today.
- It can take a very long time (maybe not forever) for a legislature (Congress or a state legislature) to adopt statutes that address issues such as the cell phone or GPS tracking. And when a legislature does act, it tends to adopt technologically-specific legislation . . . like the statute I mentioned above, the one that governs the use of a device that captures the numbers dialed on a traditional landline phone. This, of course, means that the statute may well be out of date by the time it goes into effect.
Thursday, April 20, 2006
State-sponsored crime
This is Klaus Fuchs. During the 1940s, he gave the Soviet Union, then a US-British ally, information about U.S. and British efforts to develop nuclear weapons. Fuchs' activities finally came to light, and in 1950 he was convicted of espionage -- supplying military secrets to a country with which neither the U.S. nor Britain was, or had been, at war.
Basically, treason consists of giving "aid and comfort" to the enemies of the United States. Fuchs could not be convicted of treason because the U.S. was not at war with the Soviet Union when he passed on its nuclear secrets; indeed, for much of the period, the U.S. and the Soviet Union were allies in the struggle against the Axis powers.
Espionage is similar to treason, in that it also involves collecting information which a country wants to keep secret.
In 1951, Julius and Ethel Rosenberg were convicted of espionage for transmitting "information relating to the national defense" to a foreign government -- the Soviet Union (again). Like Fuchs, they supplied information about the U.S.' nuclear weapons program; like Fuchs, their convictions were predicated on a traditional form of espionage, one that involved information that could be used to gain tactical advantage in the case of an armed conflict between two nations. Espionage offenses were historically a derivative form of treason.
In 1996, the U.S. adopted the Economic Espionage Act (18 U.S. Code sections 1831-1839), which expanded the concept of espionage to include the surreptitious gathering of information that could be used to gain economic, rather than military, advantage. The Act is unique; not only do other countries lack such legislation, but many countries actively engage in economic espionage. This includes countries that are otherwise allies of the United States, such as France and Israel; each year, a report is submitted to Congress which documents the extent of these activities.
The Economic Espionage Act was intended to combat these activities by criminalizing them. It prohibits the theft of "trade secrets," which are defined as a "formula, practice, process, design, instrument, pattern, or compilation of information used by a business to obtain an advantage over competitors within the same industry or profession." Unlike treason or conventional espionage, economic espionage focuses on "civilian" information; it is predicated on the recognition that countries compete economically as well as militarily. Indeed, many argue that we are currently engaged in economic warfare with various countries, including China.
I discuss this and other aspects of economic espionage in a law review article that will soon be published by the Houston Journal of International Law. The article should be available online at their website. If you want to read more about this, I suggest you read the article ("State-Sponsored Crime: The Futility of the Economic Espionage Act") . . . which should be online soon.
The Economic Espionage Act creates two distinct crimes: 18 U.S. Code section 1831 criminalizes "economic espionage," which consists of stealing U.S. trade secrets in order to benefit a foreign government. So, a section 1831 offense occurs when, say, an Israeli agent steals confidential proprietary information from a U.S. drug company and transmits that information to sources in Israel, the goal being to improve Israel's ability to compete in this area. 18 U.S. Code section 1832 makes the theft of trade secrets a crime; it focuses on domestic activities, thefts that are intended to benefit individuals or entities within the United States. A section 1832 offense would occur if, say, research scientists working for Company A stole secret proprietary information from that company and used it to open their own, rival company.
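The distinction between the two offenses turns on who the theft is intended to benefit, which can be reduced to a toy decision rule. This is purely illustrative, not a statement of the Act's full elements (each section also requires, among other things, knowledge that the information is a trade secret):

```python
# Toy sketch of the section 1831 / section 1832 distinction drawn
# above. The single factor modeled here -- intent to benefit a foreign
# government -- is the key dividing line, but real cases turn on the
# statutes' complete elements.

def classify_offense(benefits_foreign_government: bool) -> str:
    """Map a trade-secret theft to the EEA section it most resembles."""
    if benefits_foreign_government:
        return "18 U.S.C. section 1831 (economic espionage)"
    return "18 U.S.C. section 1832 (theft of trade secrets)"

print(classify_offense(True))   # foreign agent stealing for his government
print(classify_offense(False))  # scientists founding a rival domestic firm
```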
Economic espionage, the type of activity criminalized by 18 U.S. Code section 1831, is the more serious of the two for at least two reasons:
- It results in the transfer of proprietary information to a foreign power, which erodes the U.S.' ability to compete in the global marketplace. The U.S. loses a tactical advantage in the evolving economic war among nations, just as it lost a tactical advantage to the Soviet Union when Fuchs and the Rosenbergs transmitted nuclear secrets to agents of that country. The point here is that economic espionage directly damages the country, while the theft of trade secrets generally damages a company.
- It is MUCH more difficult to control. In 1996, the U.S. decided that stealing trade secrets, or economic espionage, was of such significance that it warranted creating new criminal offenses -- criminal prosecution being the traditional means we use to control undesirable behaviors. As I have explained elsewhere, however, criminal prosecution is effective only against traditional, real-world crime. The domestic offense the Act created -- the theft of trade secrets -- is sufficiently analogous to real-world crime that criminal prosecution may be an effective means of dealing with it. (Though even here I have reservations, for reasons I explain in the forthcoming article I noted above.) The economic espionage offense is very different, however, for several reasons . . . the most important of which is that it is state-sponsored crime.
The futility of pursuing criminal prosecution becomes even more evident when economic espionage is conducted remotely . . . when the agent of the foreign government hacks into a U.S. business' computer system, extracts data containing proprietary information and downloads it to a computer in the foreign country. Here, the U.S. has absolutely no chance of apprehending the perpetrator while she is conducting her nefarious activities. Its only chance to pursue criminal prosecution depends upon the agent's own country's being willing to surrender her for prosecution which, again, is extremely unlikely.
It is unlikely because the agent was, after all, operating on behalf of the foreign government; it is therefore as unlikely that the foreign government would give this civilian spy up to be prosecuted as it is that the U.S. would surrender a CIA agent who had been operating covertly in another country to be prosecuted for espionage by that country.
It is also unlikely because, as I noted at the outset of this post, economic espionage is not regarded as a crime in most countries. It is a basic principle of international law that countries will not, and do not have to, surrender their citizens to be prosecuted in Country X for activity they conducted while they were in their own country and that was legal in their own country.
This is a very long post, and this is a very complex issue. I think I will come back to it again, in another post. In the interim, you might want to check out that article.
Wednesday, April 19, 2006
Can you trust your car?
This post is essentially a fusion of the ideas I threw out in my post on "Cartapping" (February 12, 2006) and the 1996 paper, Information Terrorism: Can You Trust Your Toaster?, written by Matthew G. Devost, Brian K. Houghton & Neal A. Pollard.
In my cartapping post, I explained how the FBI had used a cellular connection that was a component of an emergency services system -- analogous to if not precisely the GM OnStar system -- to eavesdrop on conversations held in a car. My point there was how embedded environmental technology can be deliberately exploited by law enforcement for evidence-gathering purposes. The greater issue, of course, is how technology can, and will, erode our privacy IF we cling to what I call a bricks-and-mortar conception of privacy, i.e., a conception of privacy which says that if I do not use physical barriers to shield my activities from law enforcement scrutiny, then they are not "private" under the Fourth Amendment.
(As I've explained before, if something is "private" under the Fourth Amendment, then law enforcement officers have to satisfy the Amendment's requirements by getting a search warrant or relying on an exception to the search warrant requirement before they eavesdrop or conduct other invasions of privacy. If something is not "private" under the Fourth Amendment, then they do not need to rely on a warrant or an exception -- the person who did not maintain the privacy of his or her activities bears the risk that law enforcement will scrutinize them.)
So, "Cartapping" was about how law enforcement can deliberately exploit technology embedded in our environments. The Devost article is about how embedded technologies can be exploited by terrorists and others who wish to do us harm . . . hence, the issue of regarding one's toaster with a level of distrust.
My post and the Devost article are both about how embedded technology -- technology we take for granted and so ignore -- can be exploited to (i) cause direct physical harm to citizens or (ii) inflict a more indirect harm by subjecting them to law enforcement scrutiny without their knowledge or consent. Both are about direct, positive action directed at a target . . . a target of terrorists for the authors of the Devost article and a target of law enforcement for my "Cartapping" post.
A relatively recent news story highlights an additional, and equally interesting possibility: Ralph Gomez of St. Augustine, Florida, bought a new Cadillac and was showing the car and its OnStar system off to his girlfriend. Something went horribly awry -- the OnStar operator for some reason tried to contact Gomez, but the volume on his OnStar was set so low he couldn't hear the operator calling him. Concerned (and no doubt following standard operating procedure), the operator called police, who stopped Gomez' car to see if there was any emergency.
There was no emergency . . . but there was, according to the wire story, cocaine in plain view on the car's console. That resulted in Gomez' being arrested for illegal drug possession AND his car and cash he had in the car's being seized, presumably for forfeiture.
I find this case a very interesting twist on the issue the Devost authors and I both raised, i.e., the deliberate exploitation of technology to the disadvantage of a citizen (investigation) or citizens (terrorism). Here, no one deliberately exploited the OnStar system. Instead of being hijacked for law enforcement eavesdropping or used for terrorism, it functioned precisely as it was intended to . . . and, in the course of doing so, ratted out Mr. Gomez.
So, can you trust your car?
Sunday, April 16, 2006
Treaty
As I have explained elsewhere, the major problem law enforcement faces in dealing with cybercrime is the lack of cybercrime laws in some countries and the inconsistencies that exist between cybercrime laws in other countries.
Cybercriminals can exploit, and are exploiting, these gaps and inconsistencies to their advantage: If there is no law criminalizing, say, the dissemination of a computer virus, then the person responsible for the virus cannot be prosecuted in his home country and cannot be extradited to be prosecuted in other countries harmed by the virus. (It is a basic principle of international law that someone cannot be handed over by Country X to Country Z for prosecution unless the conduct at issue was a crime both in Country X and Country Z; this is known as the principle of "double criminality".)
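The double criminality principle is, at bottom, a simple conjunction over the two countries' criminal codes, which a toy sketch makes plain. The countries and offense lists below are entirely hypothetical:

```python
# Toy illustration of "double criminality": extradition from one
# country to another requires the conduct to be a crime in BOTH.
# The offense lists are hypothetical; "Country X" here has no law
# against disseminating a computer virus.

CRIMINAL_CODES = {
    "Country X": {"theft", "fraud"},
    "Country Z": {"theft", "fraud", "virus_dissemination"},
}

def extraditable(conduct, requested_from, requesting):
    """Extradition is possible only if both codes criminalize the conduct."""
    return (conduct in CRIMINAL_CODES[requested_from]
            and conduct in CRIMINAL_CODES[requesting])

print(extraditable("virus_dissemination", "Country X", "Country Z"))  # False
print(extraditable("fraud", "Country X", "Country Z"))                # True
```

The virus writer in Country X is therefore beyond the reach of Country Z's courts, which is precisely the gap the Convention discussed below is meant to close by harmonizing the offense lists themselves.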
Other problems arise in the investigation of cybercrimes. Basically, under international law, Country X is not obligated to assist Country Z with the investigation of a crime committed in Country Z unless there is an agreement -- a mutual legal assistance treaty -- in effect between the two. (There are other methods by which Country Z can request assistance from Country X, but they are cumbersome and time-consuming.) Cybercriminals can exploit the lack of a treaty between two countries: A cybercriminal can set up operations in Country Z and victimize citizens of Country X, knowing that the authorities in Country Z cannot assist police from Country X in their investigation of these cybercrimes. This is a very simple example, but I hope it makes the point.
In an effort to address this problem, the Council of Europe created a committee and assigned it the task of drafting a cybercrime treaty. After some years of work, the committee produced the Convention on Cybercrime. The Convention is a lengthy document, the goal of which is to harmonize the national penal law (the law governing the definition of criminal offenses) and procedural law (the law governing criminal investigations) that deals with cybercrime. Countries that sign and ratify the Convention (a country must do both to be bound to implement the treaty) pledge to ensure that (i) their law criminalizes a baseline of cybercrime offenses, (ii) their law allows them to assist other parties to the Convention with the investigation of cybercrimes and to extradite cybercriminals in their custody and (iii) their law allows them to provide other mutual assistance to countries in the investigation and prosecution of cybercrime.
I think the Convention on Cybercrime is a very impressive document. And it seems the logical solution to the problems I noted above.
Why then, I wonder, has it been ratified by so few countries? The Convention was opened for signature on November 23, 2001. As I write this, approximately four and a half years later, it has been signed by 42 countries but ratified by only 13. The Convention does not become binding on a country until it has both signed and ratified it.
Until this year, the Convention had not been ratified by any of the major European countries. It had been ratified by smaller countries, such as Albania and Croatia, but not by the major players in Europe, the countries one would expect to have been among the first to ratify the Convention. France and Denmark finally ratified the Convention this year, but Italy, Spain, Belgium, the United Kingdom and a number of other countries still have not ratified it.
The Convention is open to non-European countries under certain conditions, one being that they were involved in its drafting. Four non-European countries -- the United States, Canada, Japan and South Africa -- signed the Convention under this condition. None of them have ratified it.
This is particularly surprising with regard to the United States, because the U.S. Department of Justice was a prime mover in the creation and drafting of the Convention on Cybercrime. The US is a major target of cybercriminals, and therefore has good reason to want global cybercrime law to become a seamless web that facilitates the investigation and prosecution of cyber-perpetrators. Indeed, the U.S. Department of Justice has for years conducted programs for countries in Asia and South America; the programs are intended to encourage them to sign and ratify the Convention by explaining the benefits of doing so and providing assistance with the legal issues involved in adopting the legislation required to implement the Convention.
So, why is the Convention languishing? I don't know. I don't know why we have not ratified it, given the effort we put into its creation. The President recommended ratification to the Senate almost two years ago, and the Senate Foreign Relations Committee recommended ratification last summer. I can only assume our failure to ratify is due, in part, to the fact that the White House is and has for some time been occupied with other matters (Iraq, Al Qaeda, Katrina, etc.). I suspect it is also due to the fact that several entities -- including the ACLU, the EFF and EPIC -- oppose ratification, on the grounds that certain provisions of the Convention are inconsistent with the civil liberties guaranteed by our Constitution.
I also wonder if the general dereliction of duty with regard to the Convention is due to the same phenomenon that happens to most of us at some point in time . . . you have to fix something around the house, fixing it will be a pain, you don't really want to do it but you go out and buy the materials you need to do the job. Then they sit . . . because you really don't want to deal with the problem . . . and you have, after all, taken the first step by picking up the materials you need.
Maybe the Convention on Cybercrime is languishing because those who care about the issues it addresses worked very hard to get the Convention drafted . . . and are now assuming it will go into effect, somewhen, and take care of the problem.
(Image courtesy of the Council of Europe.)
Cybercriminals can, and are, exploiting these gaps and inconsistencies to their advantage: If there is no law criminalizing, , say, the dissemination of a computer virus, then the person responsible for the virus cannot be prosecuted in his home country and cannot be extradited to be prosecuted in other countries harmed by the virus. (It is a basic principle of international law that someone cannot be handed over by Country X to Country Z for prosecution unless the conduct at issue was a crime both in Country X and Country Z; this is known as the principle of "double criminality".)
Other problems arise in the investigation of cybercrimes. Basically, under international law, Country X is not obligated to assist Country Z with the investigation of a crime committed in Country Z unless there is an agreement -- a mutual legal assistance treaty -- in effect between the two. (There are other methods by which Country Z can request assistance from Country X, but they are cumbersome and time-consuming.) Cybercriminals can exploit the lack of a treaty between two countries: A cybercriminal can set up operations in Country Z and victimize citizens of Country X, knowing that the authorities in Country Z cannot assist police from Country X in their investigation of these cybercrimes. This is a very simple example, but I hope it makes the point.
In an effort to address this problem, the Council of Europe created a committee and assigned it the task of drafting a cybercrime treaty. After some years of work, the committee produced the Convention on Cybercrime. The Convention is a lengthy document, the goal of which is to harmonize the national penal law (the law governing the definition of criminal offenses) and procedural law (the law governing criminal investigations) that deals with cybercrime. Countries that sign and ratify the Convention (a country must do both to be bound to implement the treaty) pledge to ensure that (i) their law criminalizes a baseline of cybercrime offenses, (ii) their law allows them to assist other parties to the Convention with the investigation of cybercrimes and to extradite cybercriminals in their custody and (iii) their law allows them to provide other mutual assistance to countries in the investigation and prosecution of cybercrime.
I think the Convention on Cybercrime is a very impressive document. And it seems the logical solution to the problems I noted above.
Why then, I wonder, has it been ratified by so few countries? The Convention as opened for signature on November 23, 2001. As I write this, approximately four and a half years later, it has been signed by 42 countries but only ratified by 13. The Convention does not become binding on a country until it signed and ratifies it.
Until this year, the Convention had not been ratified by any of the major European countries. I t had been ratified by smaller countries, such as Albania and Croatia, but not by the major players in Europe, the countries one would expect to have been among the first to ratify the Convention. France and Denmark finally ratified the Convention this year, but the Italy, Spain, Belgium the United Kingdom and a number of other countries still have not ratified it.
The Convention is open to non-European countries under certain conditions, one being that they were involved in its drafting. Four non-European countries -- the United States, Canada, Japan and South Africa -- signed the Convention under this condition. None of them have ratified it.
This is particularly surprising with regard to the United States, because the U.S. Department of Justice was a prime mover in the creation and drafting of the Convention on Cybercrime. The US is a major target of cybercriminals, and therefore has good reason to want global cybercrime law to become a seamless web that facilitates the investigation and prosecution of cyber-perpetrators. Indeed, the U.S. Department of Justice has for years conducted programs for countries in Asia and South America; the programs are intended to encourage them to sign and ratify the Convention by explaining the benefits of doing so and providing assistance with the legal issues involved in adopting the legislation required to implement the Convention.
So, why is the Convention languishing? I don't know. I don't know why we have not ratified it, given the effort we put into its creation. The President recommended ratification to the Senate almost two years ago, and the Senate Foreign Relations Committee recommended ratification last summer. I can only assume our failure to ratify is due, in part, to the fact that the White House is and has for some time been occupied with other matters (Iraq, Al Qaeda, Katrina, etc.). I suspect it is also due to the fact that several entities -- including the ACLU, the EFF and EPIC -- oppose ratification, on the grounds that certain provisions of the Convention are inconsistent with the civil liberties guaranteed by our Constitution.
I also wonder if the general dereliction of duty with regard to the Convention is due to the same phenomenon most of us experience at some point . . . you have to fix something around the house, fixing it will be a pain, you don't really want to do it but you go out and buy the materials you need to do the job. Then the materials sit . . . because you really don't want to deal with the problem . . . and you have, after all, taken the first step by picking them up.
Maybe the Convention on Cybercrime is languishing because those who care about the issues it addresses worked very hard to get the Convention drafted . . . and are now assuming it will go into effect, someday, and take care of the problem.
(Image courtesy of the Council of Europe.)
Excuses . . .
Despite my best intentions (when I started this blog I swore I'd post, if not every day, at least 4 or 5 times a week), I've not posted anything for several weeks.
That is due to a combination of circumstances: business travel plus I came down with the flu and bronchitis (plus I sprained my thumb when my little-more-than-a-puppy pulled me into a tree chasing a squirrel).
So, I'm back, and I swear to do better . . . and to watch the dog much more carefully when we're in squirrel world.