This is, I hope, going to be a relatively short but provocative post.
Elsewhere, I have analyzed the necessity and viability of holding the “users” of technology (you and me) criminally liable for not preventing cybercrime, at least under certain conditions and subject to certain constraints.
I am not going to go into detail on what I have written elsewhere; if you want a longer version, you can find it here and here.
As I explain in those and other articles, our current model of law enforcement (police react to a completed crime, investigate, identify and apprehend the perpetrator, who is then prosecuted, convicted and sanctioned . . . which takes him/her out of commission and deters others from following his/her example) is not very effective for cybercrime.
It is not particularly effective for cybercrime because the model assumes territorial crime, that is, it assumes that the victim(s) and perpetrator(s) are in some physical proximity when the crime is committed. This, in turn, means that:
- they’re in the same jurisdiction, the same country, so the country’s laws clearly apply and the country clearly has jurisdiction to prosecute;
- physical proximity means there is trace evidence at the crime scene (think CSI) and that individuals located in the area of the crime are likely to have seen things that can help identify the perpetrator;
- the perpetrator may even be known, locally, which helps with identification;
- once identified, the perpetrator can be apprehended with relative ease.

None of these assumptions holds for cybercrime. A cybercriminal can:

- be anonymous or pseudonymous;
- commit crimes across national borders (maybe across several national borders); and
- commit crimes on a much larger scale (real-world crime tends to be sequential, cybercrime tends to be simultaneous and cumulative).
So, I argue, we need to move to a model that ALSO emphasizes prevention . . . which is where we come in. Currently there is no legal obligation to secure systems and otherwise frustrate cybercriminals. Currently, criminal law does not take the negligence or recklessness of the victim into account: if I leave my keys in my car and it’s stolen, that’s still a crime. Assumed risk carries no consequence. My negligence in leaving the keys there and creating an opportunity for a car thief has no consequences in criminal law because a crime is not “against” me, it’s “against” the state . . . it’s not personal, it’s a matter of social control.
In the articles I noted above, I argue that we should change this in two basic ways: One deals with crimes in which the person who didn’t secure their system is the only victim; the other deals with crimes in which the perpetrator used computers their owners (A and B, say) had not secured to attack others (C and D, say, to keep it simple).
I argue that we should use a form of assumed risk for the first scenario, the one in which the owner of the system is the only victim. What this could mean (it could be structured in various ways) is that law enforcement would have no obligation to investigate the crime and try to apprehend the perpetrator; they could if they wanted to (because crime is an offense against the state), but they would be free to ignore it if they concluded that the injury was only to the person who, in a sense, allowed it to be inflicted.
We could use a modified version of accomplice liability to address the second scenario, the consequent-victimization scenario. Here, A’s and B’s respective negligence resulted in the infliction of “harm” on C and D, who, we are assuming, did nothing wrong. Since A and B contributed to the commission of the crimes against C and D, they could be held criminally liable for facilitating those crimes. It would probably be a low level of criminal liability and a low penalty (maybe only a fine, maybe community service).
The goal in both instances is to change behavior, to bring home to people that there are consequences to not securing their systems. I think the current complacency with regard to securing (or not securing) computers is a function of our implicitly assuming that crime is the sole province of the police. (It may also be in part attributable to the fact that people don't see cybercrime as "real" crime . . . not as the kind of crime that warrants alarm systems and burglar bars.)
It was not always that way; crime control used to be partially, and even primarily, a civilian, community function. The police-only model of crime control has been dominant for less than two centuries, since Sir Robert Peel established the professionalized police force in nineteenth-century London.
Maybe it’s time we realized that technology is changing our world and that we can't rely only on old assumptions.
Or maybe I’m completely off base.
1 comment:
Wow. And I thought I was the only one. ;-)
But seriously, it is a provocative post. As a forensic engineer, I see (or have the possibility of seeing) this all the time.
I'm not a lawyer, but I don't feel that this is something that will work for home users...definitely not. Home users will end up installing so much anti-* software on their systems that they won't be able to run Solitaire, let alone do whatever it is they do.
I do believe, however, that corporate environments are different. Let me first say that, having worked both as an infosec consultant and in full-time security positions, I'm dismayed that so many corporate environments do so little with regard to security up front, and then quibble when they are *legislated* into performing what should have been common sense, or just good customer service.
Corporate e-commerce infrastructures amaze me. Developers will be hired to provide an incredible customer experience, but nowhere along the way will someone with a security viewpoint be brought in. Sites will use Flash and all manner of interesting graphics and design to entice customers to purchase products, and to make that process easier...but how secure is the information the customer is sending? What about that privacy notice at the bottom of the page, where the corporation tells the customer that their personal information will not be shared...and then someone breaks into the site and accesses that information?
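To make that concrete: even a basic safeguard like encrypting customer records before they're stored would blunt exactly the break-in scenario I just described. Here's a minimal sketch in Python, assuming the third-party cryptography package; the record format and the protect_record/recover_record helpers are illustrative, not any particular site's design:

```python
# Minimal sketch: encrypt customer PII at rest so a break-in to the web
# tier exposes ciphertext, not plaintext records. Uses the third-party
# "cryptography" package; key management is deliberately out of scope.
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service or HSM,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def protect_record(email: str, card_number: str) -> bytes:
    """Serialize and encrypt a customer record before it is persisted."""
    plaintext = f"{email}|{card_number}".encode("utf-8")
    return cipher.encrypt(plaintext)  # Fernet = authenticated encryption

def recover_record(token: bytes) -> tuple[str, str]:
    """Decrypt a stored record; raises InvalidToken if it was tampered with."""
    email, card_number = cipher.decrypt(token).decode("utf-8").split("|")
    return email, card_number

# Usage: what lands on disk is an opaque token, not the customer's data.
token = protect_record("customer@example.com", "4111-1111-1111-1111")
print(recover_record(token))
```

The point isn't this specific library; it's that a step this small is routinely skipped because no one with a security viewpoint was ever in the room.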
I'm used to seeing it...as a young 2ndLt in the Marines I was very often held to a standard that my seniors (I hesitate to say "superior officer" in some cases) did not adhere to themselves. I see the same thing in corporate arenas...senior managers will hold low-level techs to a standard, but will those same senior managers require that a new project have security personnel on the development team, or that a current project be subject to a security review?
Here's an example to consider...identity theft monitoring products. In the face of security breaches in the last year or so where personal information was stolen and possibly accessed, these services are becoming more and more important. When choosing which one is best for you, don't look at the price...instead, ask if the CEO (and/or senior managers) use the service. There are such services available where the CEOs themselves do not trust the security of their own systems/products.
So...who's liable? Who should be held responsible? Should the attacker be responsible for his actions? Yes. Should the CIO or CISO for the corporation that got hacked be responsible b/c they left a hole in the firewall, or an unpatched system, or failed to be aware of that rogue system on their network? At some point, a senior manager sat down and said, "we can't afford to hire a security person/staff", or they decided "we have to take the head count from security and assign them to another function"...that decision should have consequences, particularly if your sensitive personal information is exposed as a result (either directly or indirectly).
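And catching that hole in the firewall or that rogue system doesn't even require a dedicated security staff...a low-cost check goes a long way. Here's a rough sketch in Python using only the standard library; the host address and the expected-port list are made-up examples, not anyone's real configuration:

```python
# Rough sketch: flag unexpected listening ports on a host you own --
# the kind of cheap check that can surface the "hole in the firewall"
# or the rogue system described above. Standard library only.
import socket

EXPECTED_PORTS = {22, 443}  # what policy says should be listening

def open_ports(host: str, ports: range, timeout: float = 0.5) -> set[int]:
    """Return the subset of `ports` that accept a TCP connection."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                found.add(port)
    return found

if __name__ == "__main__":
    # Only scan systems you are authorized to test.
    listening = open_ports("192.0.2.10", range(1, 1025))
    surprises = listening - EXPECTED_PORTS
    if surprises:
        print(f"Unexpected open ports: {sorted(surprises)}")
```

Run something like that on a schedule, compare the results against policy, and the "we couldn't have known" excuse gets a lot thinner.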
H. Carvey
http://windowsir.blogspot.com