I’ve done a couple of posts on the “insider” issue: the problem of defining when someone who is authorized to access a computer system exceeds the permissible bounds of that access and therefore becomes subject to criminal liability.
As I explained in a post I did earlier this year, the problem arises because the crime these “insiders” are prosecuted for is called “exceeding authorized access.”
The problem, as I explained in that and other posts, comes in establishing that the “insider” knew he or she was exceeding the scope of his or her authorized access. It's a basic premise of criminal law that you can’t be prosecuted for a crime unless you intended to commit the crime (I purposely exceed my authorized access to my employer’s computer system) or at least knew you were committing the crime (I know I’m exceeding my authorized access to my employer’s computer system but I’m going to do it anyway). In other words, you must have been put on notice as to what is permitted and what is not when it comes to using that computer system.
The problem criminal law has had with this crime is one of line-drawing. You have a trusted employee who’s authorized to use the computer system for certain purposes, like an IRS customer service representative who’s authorized to use the system to look up information (tax return filings, refunds, etc.) in order to answer questions from the taxpayers who contact the office. Assume the IRS agent uses the system to look up friends, his fiancé’s father and a number of other people; that use is, as a matter of common sense, completely out of bounds. The IRS agent is, in effect, off on a virtual frolic and detour. “Frolic and detour” is a term the law uses to refer to the situation in which an employee briefly abandons carrying out his employer’s business to run an errand or do something else personal; a delivery driver who makes a detour to visit his girlfriend would be an example of frolic and detour.
So in my hypothetical, we all know as a matter of common sense that the IRS agent went on a virtual frolic and detour and, in so doing, exceeded the bounds of his authorized access to the IRS system. But common sense won’t work for the law; the law has to be able to draw a reasonably clear line. So the law has to be able to define what “exceeded authorized access” means with enough precision to put people on notice as to what they can, and cannot, do.
The problem, as I’ve noted before, is that it can be really difficult to do that in practice. I did a post earlier this year about a corporate Vice President who used his employer’s computer system to collect information the VP could use when he went out on his own. As I noted in my post, the court held that the VP did not exceed authorized access to the system because he was allowed to use it to look up the information at issue.
Some, as I may have noted, think the solution to the problem of defining the crime of exceeding authorized access lies in code; they say employers should simply use code to lock people into permissible use zones. If you’re somehow able to get around the limits on your permissible use zones, the efforts you made to do so would inferentially establish your intent, i.e., you knew you were exceeding authorized access and intended to do just that. (And if you weren’t able to get around the limits, there’d be no exceeding authorized access, which I think is the real point.)
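To make the code theory a little more concrete, here is a minimal sketch, assuming a purely hypothetical agency system (none of the role names, functions or permissions below come from any real system): the software itself defines the permissible use zone, so staying inside it requires no judgment call, and getting outside it requires a deliberate act of circumvention.

```python
# A minimal sketch of the code theory: the permissible use zone is
# enforced by the software itself. Everything here (PERMISSIONS,
# lookup_refund_status, etc.) is an invented illustration.

PERMISSIONS = {
    "customer_service": {"read_return_status", "read_refund_status"},
    "auditor": {"read_return_status", "read_refund_status", "read_full_return"},
}


class AccessDenied(Exception):
    """Raised when a request falls outside the user's permissible use zone."""


def authorize(role: str, action: str) -> None:
    """Refuse any action the role has not been explicitly granted."""
    if action not in PERMISSIONS.get(role, set()):
        raise AccessDenied(f"role {role!r} may not perform {action!r}")


def lookup_refund_status(role: str, taxpayer_id: str) -> str:
    authorize(role, "read_refund_status")
    return f"refund status for {taxpayer_id}"  # placeholder for a real lookup


def lookup_full_return(role: str, taxpayer_id: str) -> str:
    authorize(role, "read_full_return")
    return f"full return for {taxpayer_id}"  # placeholder for a real lookup


# A customer-service representative answering a taxpayer's question stays
# inside the zone; the same representative browsing a full return does not,
# and the system simply refuses.
print(lookup_refund_status("customer_service", "123-45-6789"))
try:
    lookup_full_return("customer_service", "123-45-6789")
except AccessDenied as e:
    print(e)
```

The point of the sketch is only that the limit is technical rather than textual: an employee who never hits the refusal has nothing to circumvent, and one who works around the check has, by that very act, given us evidence of intent.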
The other theory is the contract theory, which I’ve written about before. It’s the one I was referring to above, when I talked about employer policies that tell you what you can and cannot do. The problem with that – as I wrote in a post earlier this year – is that it can be very difficult to come up with workable policies that do this.
I did a post earlier this year suggesting an alternative approach: making it a crime to misuse authorized access instead of exceeding authorized access. I still like that idea but I have a student who’s doing an independent study on this general issue, and as we were discussing the problem last week, I came up with another approach. I’m going to outline that approach and I’d be interested in any comments you might have on it.
We were talking about the central problem in defining the crime of exceeding authorized access: drawing a clear line between what is and is not permissible. My problem with the line-drawing approach is that it essentially relies on prescriptive rules.
Law uses two kinds of rules: prescriptive rules (do this) and proscriptive rules (don’t do that). Prescriptive rules tend to be civil in nature; we have lots of regulatory rules that are prescriptive rules. Proscriptive rules tend to be criminal in nature; a statute that defines a crime is structured as a prohibition of certain behavior and/or certain results, such as causing the death of a human being. The distinction between the two types of rules isn’t perfect; categories sometimes blur in law, for various reasons (such as the fact that law is concerned with practical matters and that legislators are not always masters of statutory drafting). But it exists, and it’s a good conceptual model for thinking about the exceeding authorized access problem.
I see the contract-line-drawing approach to the problem as relying on prescriptive rules. In this approach, it’s basically up to the employer to develop rules that prescribe what the employee can do and stay within the scope of his or her authorized access to the employer’s computer system. This approach, in other words, puts the risk of error on the employer; if the employer doesn’t get the policy exactly right, it leaves some play, some room, for employees to exploit the computer system in greater or lesser ways for their own purposes. I’m not, of course, saying it’s impossible to develop policies that can define the scope of authorized access with some precision; I’m simply saying I think it can be very difficult, especially with regard to certain types of employment.
That brings me to the alternative approach I came up with when my student and I were discussing this last week. The alternative approach is to put the risk on the employee, not the employer. How could we do that and how would it help solve the problem?
The way we could do that is to make exceeding authorized access a crime – just as we currently do – but alter the way we define the crime. As I noted above and as I’ve noted in other posts, the problem we’re having with defining the crime of exceeding authorized access is the issue of intent. If we can’t draw precise lines between what is and what is not forbidden, then the law has not clearly forbidden at least certain types of conduct, which means a person who engages in that conduct cannot be prosecuted because we can’t show that he or she intended to exceed, or knew he or she was exceeding, authorized access.
We could address that by making exceeding authorized access a strict liability crime. As Wikipedia explains, strict liability crimes do not require the prosecution to prove intent; all the prosecution has to prove is that the person engaged in the prohibited conduct (i.e., exceeded authorized access). Strict liability crimes put the risk on the person because if they do what’s forbidden, they have no excuse; they can’t say, “I didn’t mean to” or “I didn’t know.” That may seem harsh, and it can be. To mitigate the harshness of holding people criminally liable without requiring intent, the law uses a compromise: We can eliminate intent in a criminal statute but, in exchange, the penalties have to be small, usually just a fine.
Strict liability crimes evolved about a hundred years ago as a way to enforce rules that were being adopted to encourage businesses and others to follow certain standards. There’s a case, for example, in which the CEO of a grocery company was convicted of a strict liability crime after his company let food stored in a warehouse be contaminated by insects and other vermin. The CEO appealed his conviction to the U.S. Supreme Court, arguing that he shouldn’t be held liable because he didn’t know what was happening in the warehouse. (It was a big company with lots of warehouses.)
The Supreme Court upheld the conviction because the crime he was convicted of was what’s called a regulatory offense. Regulatory offenses don’t have individual victims; they’re intended to encourage people to abide by the law in ways that contribute to the greater social good (like ensuring that food isn’t contaminated). As the Supreme Court noted, the best way to go about doing that is to use strict liability; strict liability puts the risk of error on the person who’s responsible for seeing that a rule – a rule the purpose of which is to promote the greater social good – is enforced. If the rule is not enforced, then the person who’s responsible has no excuse; good intentions or a lack of good intentions isn’t relevant. All that matters is the result.
So as my student and I were talking about all this, I came up with the idea of creating an exceeding authorized access crime that’s a strict liability crime. How would we do that? Well, I’m not exactly sure. If we decided this was a good way to go, we’d have to figure out how to structure the crime. I suspect – though I’m not sure (and could be wrong) – that we could come up with a good general definition of what it means to exceed one’s authorized access to a system. We might phrase it in terms of using the system only in a fashion appropriate for carrying out your assigned tasks, say, or something similar.
Or maybe it’s a stupid idea. Maybe it wouldn’t do anything to help achieve clarity in this area. I still like my misusing authorized access alternative. What I found (and still find) intriguing about this notion is the idea of putting the risk on the person who is in the position to exceed authorized access. I really don’t think the prescriptive-rules approach (putting the risk on the employer) is a particularly viable option . . . but, again, I could be way off base.
4 comments:
What if the user was an administrator? Why couldn't the employer be held responsible based on the authority of their logon credentials?
Are you saying that we could use vicarious liability (I'm liable for what someone else did because of a relationship I have with them, such as employer) to hold the employer liable for an employee's exceeding authorized access?
If so, that could be a very good approach . . . analogous to what law does with liquor sales. Laws often hold the employer liable if the employee sells liquor to a minor, regardless of whether the employer knew or had any specific involvement in the sale.
I guess I meant that if someone is an administrator and has that access, then they have full, unfettered access to the system. If the person is a user, then the system should only allow that person to access what they are allowed to see. If they exceeded that logon authority, it would be obvious. So the employer would have policies for users and, I assume, a policy for administrators if warranted. I think logon credentials with a correct logging setup could eliminate any misunderstanding about intent. Just MHO.
I think you're right about the fact that logon credentials could clarify the issue of intent (or lack of intent) . . . but is it always possible to limit what users are allowed to see?
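To make that concrete, here's a rough sketch of the kind of setup I take you to be describing, assuming a hypothetical system in which each set of logon credentials carries an explicit list of records the holder may view and every lookup, allowed or denied, is written to an audit log (all of the account names and the log format below are invented for illustration):

```python
import logging
from datetime import datetime, timezone

# Hypothetical access lists keyed by logon credential; None means the
# administrator account is unrestricted.
ACCESS_LISTS = {
    "jdoe": {"role": "user", "records": {"ACCT-100", "ACCT-101"}},
    "admin1": {"role": "administrator", "records": None},
}

logging.basicConfig(filename="access_audit.log", level=logging.INFO)


def fetch_record(username: str, record_id: str) -> bool:
    """Return True if the lookup is permitted; log every attempt either way."""
    entry = ACCESS_LISTS.get(username, {"role": "unknown", "records": set()})
    allowed = entry["records"] is None or record_id in entry["records"]
    logging.info(
        "%s user=%s role=%s record=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(),
        username,
        entry["role"],
        record_id,
        allowed,
    )
    return allowed


# An ordinary user stepping outside his or her access list is both blocked
# and recorded, while the administrator account is not limited at all.
print(fetch_record("jdoe", "ACCT-100"))    # True  -- within the access list
print(fetch_record("jdoe", "ACCT-999"))    # False -- denied and logged
print(fetch_record("admin1", "ACCT-999"))  # True  -- unrestricted credentials
```

Of course, the hard part is still deciding in advance which records belong on each list . . . which brings back the line-drawing problem I started with.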