Monday, April 14, 2008

[Xanga] A Post-Utility Model of Decisionmaking

Proponents of rational choice theory often rank preferences according to a utility function: that is to say, for actions or choice options A and B, and a utility function f(x) giving the amount of satisfaction derived from, or the desirability of, x, the ordinal relationship between f(A) and f(B) gives the selecting agent grounds for a rational or reasonable selection: f(A) > f(B) leads to reasonably choosing A, while f(B) > f(A) leads to reasonably choosing B. Situations in which f(A) = f(B) lead to indifference: a choice cannot be made under the rational choice model, because there are, under utility, no grounds for choice.
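
To make the rule concrete, here is a minimal sketch in Python (the function name choose and the example utility values are my own illustration, not part of the theory):

```python
def choose(a, b, f):
    """Compare options a and b by their utilities under f.

    Returns the strictly higher-utility option; returns None to mark
    indifference, since the rational choice model supplies no grounds
    for choosing when f(a) == f(b).
    """
    if f(a) > f(b):
        return a
    if f(b) > f(a):
        return b
    return None  # f(a) == f(b): indifference

# Arbitrary illustrative utilities:
f = {"A": 3, "B": 2}.get
print(choose("A", "B", f))  # -> A, since f(A) > f(B)
```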

I propose a post-utility model - what I call the responsive model - for decisionmaking, on explicitly theological grounds. My reasoning is as follows: imagine the case where utility has already been maximized, where the optimal (or even infinite) utility of the selecting agent has been guaranteed. In such a case, under utility-maximization versions of rational choice theory, the agent exists in a state of indifference. However, I argue that this indifference is only relative to the agent himself, and leaves room for adopting external aims as a preference set for action, based not on individual utility but on the desires of the other.
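
One way to render the proposal schematically (a sketch under my own assumptions; the flag utility_saturated and the preference function other_prefers are hypothetical names, and the theological content is of course not captured by code):

```python
def responsive_choice(options, f, utility_saturated, other_prefers):
    """Sketch of the responsive model.

    While the agent's own utility is still at stake, this reduces to
    ordinary utility maximization over f. Once utility_saturated is
    True -- the agent's optimal (or infinite) utility guaranteed --
    f no longer discriminates among options, and the agent adopts
    the desires of the other (other_prefers) as the preference set.
    """
    if not utility_saturated:
        return max(options, key=f)           # standard rational choice
    return max(options, key=other_prefers)   # act on the other's desires
```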

How does this differ from a model wherein the utility of the other is factored into the individual's calculations? In order that I might answer this objection (basically, that my proposed model is merely a rephrasing of utility maximization), I divide the potential cases of the other into two: where the other has not secured his, her, or its own maximal (or even infinite) utility, and where the other has done so.

In the case where the other has not secured his, her, or its own utility, my actions under the responsive model are identical to those of an agent under rational choice utility maximization who has construed the ends of the other as useful to himself. However, my motivation differs greatly: my utility, I remind you, has already been assured to be maximal and, therefore, no action I take can affect my future utility, positively or negatively. This precludes treating the ends of the other as means to the agent's obtaining greater utility.

What, then, motivates the adoption of the other's ends as the individual's own? I beg leave to set this question aside for the moment, turning instead to explicate the second case: that where the other has also secured his, her, or its own maximal (potentially infinite) utility.

In the case where the utility of neither the self nor the other is at stake, what informs our actions toward that other? If utility-maximization theory is to be believed, then in a situation where two individuals face no possible gain or loss of utility, there are no actions to be taken that involve securing utility or protecting against its loss, and hence no options other than indifference and inaction.

But this seems inherently mistaken. The guarantee of utility optimization regardless of action should result in freedom to act, not inability to act. But if the only rational choices are those made toward the end of optimizing utility, then, once that aim is reached, there is no further progress to be made. Post-utility-optimization decisionmaking under this schema seems at best arbitrary or, at worst, impossible.
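
In terms of the earlier sketch: once the agent's utility is guaranteed, f is effectively constant, so every pairwise comparison lands in the indifference case and maximization yields nothing (a toy illustration; the option labels are mine):

```python
f = lambda option: float("inf")  # utility guaranteed: every option ties at the maximum

options = ["aid the other", "do nothing", "anything else"]
# No option strictly beats another, so utility supplies no grounds for choice:
print(any(f(a) > f(b) for a in options for b in options))  # -> False: indifference across the board
```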

What, then, can possibly motivate post-utility-optimization action towards the other? Such motivation must have certain characteristics: for one, it cannot contribute to the utility of the self or the other, but must, in some way, be desirable. How is this possible?
[more to come]
