
The grave of Karl Marx in London’s Highgate Cemetery reads:
“The philosophers have only interpreted the world, in various ways. The point, however, is to change it.”
I’m in agreement with Marx and that’s probably why I quit philosophy.
In a previous post, I laid out how and why the blurring of analog and digital realities confronts us with serious ethical questions related to what we do and who we wish to be. In this post, I would like to take a closer look at what some philosophers have said about what it means to say something is good for someone. Then, I’d like to consider how these accounts relate to the design of recommender systems, which ostensibly serve to recommend “good” items to persons in a process broadly referred to as personalization.
In particular, this post was inspired by a reading of philosopher Richard Kraut’s insightful book, What is Good and Why: The Ethics of Well-Being. There, Kraut gives us three competing accounts of the good: the conative, the hedonistic, and the developmentalist.
The Conative Account of the Good
Let’s start with arguably the most intuitive and simple view about the good. The conative (its Latin root, conari, means to try or exert effort) account goes like this:
“If G is good for S, that must be because S pursues it, or desires it, or likes it — or would, under certain conditions, pursue, desire, or like it.”
On this view, we assume that what a person searches for, likes, or clicks on must be good for that person; why else would he have spent his time searching for it or clicking on it? By collecting your browsing history, we can infer reasonably well what is good for you. If it weren’t, you wouldn’t have been looking for it. As Kraut points out, the conative account is naively appealing for several reasons. Above all, it gives the person a “special role” in determining what is good for him.
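To make the conative assumption concrete, here is a minimal sketch (with a made-up click log and invented item names, not any real system’s pipeline) of a recommender that equates “clicked on” with “good for you”: items are scored by nothing more than past engagement, and no question is asked about whether that engagement actually served the person well.

```python
from collections import Counter

# Hypothetical click log: each entry is (user_id, item_id).
# Under the conative account, a click is taken as evidence that
# the item is good for the user; nothing more is asked.
click_log = [
    ("alice", "shoes_ad"), ("alice", "shoes_ad"),
    ("alice", "conspiracy_video"), ("bob", "cooking_blog"),
]

def conative_scores(user_id, log):
    """Score items for a user purely by how often she engaged with them."""
    counts = Counter(item for uid, item in log if uid == user_id)
    return counts.most_common()

# "Good for Alice" is simply whatever Alice clicked most.
print(conative_scores("alice", click_log))
# [('shoes_ad', 2), ('conspiracy_video', 1)]
```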
According to the conative account of the good, if I want to figure out the best ways to fake my own death to collect life insurance using Google search, faking my death is good for me. Google will then look to serve me ads that match this interest (assuming such ads exist). This approach seems to mirror the Internet’s Postmodern “anything goes” culture; conveniently, it also makes for a good business strategy, since you never have to judge your customers’ interests. For instance, Facebook still will not fact-check political ads delivered on its platform, though Google has taken some steps to monitor political ads and Twitter has banned them outright.
Liberalism and Conation
Westerners are particularly enamored of the conative account. It seems to respect the liberal principle of autonomy we so highly esteem. The idea is that one ought to have a free choice in what one pursues, without external pressure from one’s government, for example. In America this belief is sacrosanct. Case in point: people are currently protesting against being told to wear face masks during the resurgence of a worldwide pandemic.
Perhaps the most influential proponent of such a view was John Stuart Mill, who argued that a good society was one in which people had the freedom to develop their character, pursue their vision of the good life, and engage in public debate. Mill worried that individuals could not flourish without some protections from the tyranny of the majority and of society. He also believed that one could not be a “person of character” if one’s desires and impulses were not one’s own.
More recently, the political philosopher John Rawls espoused a similar liberal, pluralistic conception of the good in his influential book A Theory of Justice. For Rawls, primary goods are objects of value “that every rational man is presumed to want.” Rawls was deeply impacted by the philosophical tradition of contractualism, dating back at least to Hobbes’ Leviathan. In the end, Rawls leaves it up to individual societies to decide how to distribute these primary goods in a just way, which he argues is achieved by following a very specific kind of distributive procedure (embodied in his veil of ignorance thought experiment) and attending to certain features of its final distribution.
As even Google has recognized, personalized recommendations and the personal data they are based on have the potential to negatively impact our personal identities, and therefore may have serious society-wide implications. As it stands now, machine learning has no way to take into account one’s vision of the good life and the associated moral convictions while making so-called personalized predictions.
In short, the conative account is appealing because it flatters us as unique individuals in the same way the old Apple slogan “Think Different” did. The irony, of course, is that by deciding to think differently and buy an Apple product, we in fact think just like everyone else who buys the same mass-produced computer.
The Hedonist Account of the Good
Let’s move on to the second major account of the good. Roughly speaking, hedonism is the view that:
“Whenever it is good for S that some state of affairs occurs, that occurrence is good for S simply and only by virtue of his being pleased; and whenever S is pleased, that is good for S simply and only by virtue of his being pleased.”
In other words, if buying a new pair of shoes recommended to me in a Facebook ad makes me feel pleased, then buying the shoes is good for me.
Again, John Stuart Mill’s distinction between the quality and quantity of pleasure is useful here. Mill says that when deciding between two activities that will both give us some pleasure — and are therefore good for us — we ought to choose the activity that gives us the higher quality of pleasure, not simply the greater quantity.
I am not quite sure how one can determine the quality of pleasure without reference to some external “objective” list of pleasures. Mill himself famously stated, “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.” Similarly, the quantity of pleasure seems nearly impossible to measure reliably or accurately across different persons, as Amartya Sen and others have argued.
Another interesting critique of such a view is given by the philosopher Judith Jarvis Thomson in her book Goodness and Advice. She argues that equating the good with what brings us the most pleasure or happiness is so general as to be meaningless. According to her, “Good is always good in some way, in some context, for some beings.”
Thomson’s account is reminiscent of Aristotle’s teleological, or functional, account of the good. Teleology (from telos, ‘end’ or ‘purpose’) presupposes that things have a specific goal or function by their nature. A knife is good for cutting vegetables but bad for giving a massage. We cannot make simple, blanket statements about what is good, Thomson would argue. Goodness is highly contextual and complex and will depend on certain facts about your identity and environment. A particular product recommended within an app may be good for you in that context, yet the same product might be bad for you in another context outside the app.
The Developmental Account of the Good
Let’s move on to our final account of the good. This account relies heavily on the Aristotelian notion of human flourishing. According to Kraut, flourishing (a cognate of “flower”) means:
“Developing properly and fully, that is, by growing, maturing, making full use of the potentialities, capacities, and faculties that (under favorable conditions) [one] naturally [has] at an early stage of [one’s] existence.”
The developmental theory assumes that human lives follow, at least in general form, a basic developmental structure and that it is good for one to allow the unfolding of this structure over one’s life.
On the developmental view, what is good for someone is anything that promotes the flourishing of the person’s potentialities, capacities, and faculties; similarly, what is bad is anything that diminishes them. More specifically, Kraut says that a human life comprises the “maturation and exercise of certain cognitive, social, affective, and physical skills.”
One relevant cognitive skill alluded to above is the ability to make decisions about one’s own life. Kraut writes,
“One of the great goods that a child must be trained to acquire and enjoy is the ability to make up his own mind about areas of his life that are important to him — what job he will have, whom he will marry, with whom he will associate, where he will live, and so on. If major choice points do not exist, if every important aspect of his life is planned for him, his cognitive powers are not challenged, and he is not called upon to exercise his imagination…If I take it upon myself to control everything in your environment, so that you no longer have effective control over anything that you do, then I do you great harm by turning you into a passive creature who has no decision-making powers or skills.”
Now, one might argue that these recommender systems suggest relatively benign and unimportant things such as movies to watch or websites to visit. Yet the trend is moving increasingly in the direction of automated decision making and recommendation. Already, social media and dating apps are influencing whom we affiliate with and have romantic relations with. For more details on this topic, see my other Medium article on religion and recommender systems.
Agentic Skills of Autonomy: Becoming a Person
The feminist philosopher Diana Meyers comes to similar conclusions in her book Being Yourself: Essays on Identity, Action and Social Life. She argues that persons need a constellation of three agentic skills in order to exercise autonomy: self-discovery, self-definition, and self-direction. She refers to these three skills collectively as autonomy competency.
Meyers describes how autonomy competency is related to the development of the person over her life:
By exercising autonomy competency, agentic subjects become aware of their actual affects, desires, traits, capabilities, values, and aims, conceive realistic personal ideals, and endeavor to bring the former into alignment with the latter. Autonomy competency sets in motion a piecemeal, trial-and-error process of self-understanding and self-reconstruction that underwrites a provisional authentic identity. Autonomous actions are those that enact attributes constitutive of one’s authentic identity and actions that prompt development of one’s authentic identity. In my view, then, the concept of autonomy competency provides a philosophical psychology and an epistemology of authentic identity and self-determination.
Meyers’s account aligns well with current discussions in the ethics of recommender systems, which focus on serendipity and filter-bubble issues. We may be able, at least to some extent, to operationalize these constructs of self-discovery, self-definition, and self-direction and use them in generating recommendations. If we can do that, then we would truly have personalized recommendations. In the final section, I’ll briefly explore an alternative approach to personalized recommendation centered on this notion of developmentalism.
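To make the idea of operationalizing these skills slightly more concrete before moving on, here is a purely illustrative sketch in which each skill becomes an adjustable term in a scoring function: an exploration bonus for self-discovery, alignment with self-declared interests for self-definition, and user-owned weights for self-direction. The data model, terms, and weights are hypothetical stand-ins, not a proposal for how these constructs should actually be measured.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    # All fields are illustrative stand-ins, not a real data model.
    history: set = field(default_factory=set)            # items already consumed
    stated_interests: set = field(default_factory=set)   # self-declared topics and values
    weights: dict = field(default_factory=dict)          # user-owned knobs (self-direction)

def humanistic_score(item_id, item_topics, engagement_pred, user):
    """Blend a behavioral engagement prediction with rough proxies for
    Meyers's agentic skills. Purely a hypothetical sketch."""
    # Self-discovery: give unfamiliar items a chance to surface.
    discovery = 0.0 if item_id in user.history else 1.0
    # Self-definition: reward overlap with what the person says matters to her.
    definition = len(item_topics & user.stated_interests) / max(len(item_topics), 1)
    # Self-direction: the person, not the platform, sets the trade-offs.
    w = user.weights
    return (w.get("relevance", 1.0) * engagement_pred
            + w.get("discovery", 0.0) * discovery
            + w.get("definition", 0.0) * definition)

# Example: a user who asks for more discovery and more alignment with her declared interests.
u = User(history={"item_1"},
         stated_interests={"philosophy", "gardening"},
         weights={"relevance": 0.5, "discovery": 0.3, "definition": 0.2})
print(humanistic_score("item_2", {"philosophy", "news"}, engagement_pred=0.8, user=u))  # 0.8
```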
Humanistic Personalization: You Decide the Good
The ideas of developmentalism go back thousands of years. Crucially, however, they must be periodically revisited, lest we forget them. The famed psychotherapist Carl Rogers pioneered the “person-centered approach” in the early 1960s, which strove to foster “fully functioning persons” who realize their full potential. More recently, the philosophers Martha Nussbaum and Amartya Sen have championed the capabilities approach to human rights, based on similar ideas. We imagine something similar might provide a strong ethical foundation for personalized recommendation.
Humanistic personalization, as I envision it, would entail two major changes to the current paradigm of recommender systems built on behavioral big data (BBD). These systems are largely trained on digitally afforded, non-conscious, goal-directed behaviors (Rob Kitchin and Martin Dodge refer to these data as capta), which may not align with your self-narrative. This conflict introduces a fundamental hermeneutic tension in personalization:
What do these logged behaviors actually mean? And who decides?
In addition, data about you — particularly if it derives from the databases of shadowy data brokers — may not reflect your self-identity. The first change, then, is that humanistic personalization would allow you to select your own account of the good and modify the way in which things are recommended to you, given your own conscious, reflective understanding of yourself.
Second, humanistic personalization would shift away from the optimization of global cost functions (over all persons in the training set) and “objective” performance metrics such as precision, recall, or NDCG. Enforcing mathematical formulations of “fairness,” as if we could simply turn things into a constrained optimization problem, misses the point. Instead, we need a paradigm shift towards embracing the methods of qualitative social science. The tools of interpretivist social science were designed to tease out important qualitative dimensions of phenomena, such as meaning and narrative, that do not lend themselves well to quantification. Now is the time to use them.
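For contrast, here is a minimal sketch (with toy relevance labels, not real evaluation data) of the paradigm being criticized: a single “objective” figure of merit, NDCG@k, computed from logged labels and averaged over every person in the evaluation set, leaving no room for any one person’s own judgment of fit.

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for one user's ranked list of graded relevance labels."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Toy logged relevance labels for three users' recommendation lists.
per_user_relevances = {
    "alice": [3, 2, 0, 1],
    "bob":   [0, 0, 1, 2],
    "carol": [2, 2, 2, 0],
}

# The "global" figure of merit: one number averaged over all persons.
mean_ndcg = sum(ndcg_at_k(r, k=4) for r in per_user_relevances.values()) / len(per_user_relevances)
print(round(mean_ndcg, 3))
```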
Narrative accuracy is the subjective fit of recommendations to one’s self-narrative.
Put simply, humanistic personalization would aim to optimize narrative accuracy, which is admittedly a much trickier and less well-defined task. But ultimately any “optimization” would be guided by the subjective assessment of the subject of personalization.
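What optimizing narrative accuracy could look like in practice is far less settled. As a rough, hypothetical illustration, the system might periodically ask the person herself how well each recommendation fits her self-narrative and treat that per-person, self-reported fit, rather than any global metric, as the quantity to improve. The function name and rating scale below are invented for the sketch.

```python
def narrative_accuracy(self_reports):
    """Average self-reported fit (0.0 to 1.0) of recommendations to one
    person's self-narrative. Purely illustrative; the judgments come
    from the subject of personalization, not from logged behavior."""
    if not self_reports:
        return None  # no judgment has been offered yet
    return sum(self_reports) / len(self_reports)

# Hypothetical reflective feedback from one user: "does this item fit
# the story I tell about who I am and who I want to become?"
alices_reports = [0.9, 0.2, 0.7]   # she, not the platform, supplies these ratings
print(narrative_accuracy(alices_reports))  # 0.6
```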
In closing, if we really hope to use technology such as machine learning to get closer to the good life, we must understand the essential aspects of what makes a human life worth living and how persons develop over the course of their lives. Philosophers as diverse as Hegel, Mead, Foucault, Axel Honneth, and Judith Butler have deeply explored our innate need for recognition from others. Our self-conceptions are inherently tied to how others conceive of us. Humanistic personalization aims to return the terms of digital discourse to the persons who ultimately know themselves best.
Until our recommendations begin to optimize for narrative accuracy, what we are doing more closely resembles behavioralized, not personalized, recommendation.
Original post: https://medium.com/swlh/recommender-systems-personalized-predictions-and-the-good-life-7cc6e0935f0f