Reading Notes

2 June 2020

A collection of articles or concepts I’ve come across and found interesting, June 2020 edition.

Do Artificial Reinforcement-Learning Agents Matter Morally?

Main argument:

  1. The sentience (and hence moral importance) of a mind is not binary but comes in degrees depending on the number and complexity of certain well-being-relevant cognitive operations the mind runs.

  2. Present-day artificial RL algorithms capture, in a simplified fashion, important animal cognitive operations.

  3. These cognitive operations are not tangential but are quite relevant to an agent’s well-being.

  4. Therefore, present-day RL agents deserve a very small but non-zero degree of ethical consideration.

Sub-conscious systems

One objection to seeing rudimentary levels of consciousness in simple systems like this is to point out that our own brains contain many subsystems that are arguably at least as complex as present-day RL agents, and yet we don’t perceive them as being conscious. My reply is that those subsystems may indeed be conscious to themselves.

This is close to the systems reply to the Chinese room argument, where the room is conscious as a system, and the man inside the room is also fully conscious, even though he’s merely a subpart of the system.

What qualifies as pain for an RL agent?

Another question posed is the following: an RL agent acts to optimize a sum of rewards $\max \sum_i r_i$, where $r_i$ is the reward at time step $i$. Some $r_i$ are positive, and some are negative. At first sight, it might seem like the agent has negative welfare when the $r_i$ values are below some threshold. But we could clearly just add a large constant to all of the $r_i$ in order to make them all as large as desired even in the worst case, without changing the optimization problem. It seems like this would mean the agent always has a very large welfare, which doesn’t make much sense.
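
To make the invariance concrete, here is a minimal sketch (my own illustration, not from the paper) that brute-forces a tiny fixed-horizon decision problem and checks that shifting every reward by a large constant leaves the optimal behavior unchanged. The fixed horizon matters: with variable-length episodes, a constant shift can change which policy is optimal.

```python
import itertools

# Tiny fixed-horizon problem: at each of T steps the agent picks action 0 or 1
# and receives a (possibly negative) reward depending on the step and action.
T = 3
rewards = [(-1.0, 0.5), (2.0, -3.0), (0.1, 0.2)]  # rewards[t][action]

def best_policy(table):
    """Brute-force the action sequence that maximizes the summed reward."""
    return max(itertools.product([0, 1], repeat=T),
               key=lambda seq: sum(table[t][a] for t, a in enumerate(seq)))

# Shift every reward by a large constant so even the worst reward is large.
C = 1000.0
shifted = [(r0 + C, r1 + C) for r0, r1 in rewards]

# The shift adds exactly T*C to every action sequence, so the argmax is unchanged.
assert best_policy(rewards) == best_policy(shifted)
print(best_policy(rewards))
```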

But if we reject the argument that the agent is unhappy when the reward is below some threshold, what is the correct criterion? Is the hedonic treadmill a general property of entities with welfare? This would solve the problem, but it would have various consequences that don’t seem desirable.

Utilitarianism and its discontents, on Thinking Complete

Lots of good points, one of which is to claim that the three major moral schools should be used in different circumstances:

  • Virtue ethics should govern behavior in close personal relationships

  • Deontology should govern the relationship between a person and society at large

  • Utilitarianism should be used for policy decisions

Another interesting question was:

The question of whether preferences without emotions are possible is another interesting point which was brought to my attention quite recently. If we imagine someone who acted towards certain ends, but felt no frustration when they failed, and no happiness when they succeeded, then we may well question whether they truly have preferences, or whether those preferences should be given any moral weight.

This seems related to the earlier question of whether RL agents have moral weight and whether it’s possible to cheese the reward to make the agent perpetually blissful.

The Optimal Taxation of Height: A Case Study of Utilitarian Income Redistribution

In the vein of the previous article, this paper starts by explaining that the most commonly used framework for taxation is a utilitarian one that tries to tax people on their abilities, but not on the effort they make. It concludes that tall people should be taxed significantly more.

Each person’s income is modeled as the product $y=wl$ of wage $w$ and effort $l$. Each person’s utility function is modeled as a function $u(c,l)$ of consumption $c$ and $l$, increasing in $c$ and decreasing in $l$, where $c = y - r$, $r$ being the net taxes (positive or negative) for that person.

If the government’s goal is to maximize the sum of all tax subjects’ utility, then under some assumptions on the shape of $u$, notably that each extra unit of labor effort becomes more painful as the effort already expended grows, the tax rate should be higher for people with a higher wage $w$ (see the paper for details). This makes intuitive sense: given an equal amount of income, people with higher wages will have expended less labor, so they are less unhappy about having to work more, and hence less sensitive to being taxed.

It is assumed that $u$ is known to the tax planner. Each person’s income $y$ is also known to the government, but its breakdown in terms of $w$ and $l$ is not.

The authors then propose to segment the population into height groups (though any other observable characteristic would also work) and to determine each group’s average hourly wage, which is more feasible than determining individual wages. Since tall people have a higher hourly wage on average, it follows that they should be taxed more.
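
To make the mechanism concrete, here is a toy numerical sketch (my own, not the paper’s specification: I assume $u(c, l) = \log c - l^2/2$, two wage groups, and a single balanced lump-sum transfer between them). A utilitarian planner doing a brute-force grid search ends up transferring money from the high-wage (“tall”) group to the low-wage one:

```python
import numpy as np

def agent_utility(w, r, l):
    """u(c, l) = log(c) - l^2/2, with consumption c = w*l - r."""
    c = w * l - r
    return np.where(c > 0, np.log(np.maximum(c, 1e-9)) - l**2 / 2, -np.inf)

def best_response(w, r):
    """Each agent picks effort l to maximize their own utility, given their tax r."""
    ls = np.linspace(0.01, 5.0, 2000)
    return agent_utility(w, r, ls).max()

w_tall, w_short = 1.2, 1.0  # tall people have a higher hourly wage on average

best_t, best_total = 0.0, -np.inf
for t in np.linspace(-0.5, 0.5, 201):  # t > 0 means tall pays, short receives
    total = best_response(w_tall, +t) + best_response(w_short, -t)
    if total > best_total:
        best_t, best_total = t, total

print(f"welfare-maximizing transfer from tall to short: {best_t:.3f}")  # positive
```

The intuition from the paper carries through in this toy version: at the high-wage agent’s optimum, the marginal utility of consumption is lower, so moving a unit of consumption to the low-wage agent raises the utilitarian sum.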

This model could have real-life implications if we pick a different way to partition society. My main concern with the model is that it assumes the welfare function $u$ is the same for everybody, and it seems like that might not be the case. Maybe people with a higher wage are also more sensitive to having to work more? Put differently, it’s not clear that the disutility of effort is tied entirely to the time worked $l$, and not at all to the wage $w$.

Bullshit Jobs, David Graeber

Main argument:

Why did Keynes’ promised utopia [of working 15 hours a week] never materialise? The standard line today is that (…) given the choice between less hours and more toys and pleasures, we’ve collectively chosen the latter. (…) Even a moment’s reflection shows it can’t really be true. Yes, we have witnessed the creation of an endless variety of new jobs and industries since the ’20s, but very few have anything to do with the production and distribution of sushi, iPhones, or fancy sneakers.

What did we do instead?

Rather than allowing a massive reduction of working hours to free the world’s population to pursue their own projects, pleasures, visions, and ideas, we have seen the ballooning of not even so much of the ‘service’ sector as of the administrative sector, up to and including the creation of whole new industries like financial services or telemarketing, or the unprecedented expansion of sectors like corporate law, academic and health administration, human resources, and public relations. (…)

These are what I propose to call ‘bullshit jobs’.

Why did this happen?

The answer clearly isn’t economic: it’s moral and political. The ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger (think of what started to happen when this even began to be approximated in the ’60s). And, on the other hand, the feeling that work is a moral value in itself, and that anyone not willing to submit themselves to some kind of intense work discipline for most of their waking hours deserves nothing, is extraordinarily convenient for them.

I don’t think this last part makes much sense. The author admits that the market should naturally remove all useless jobs, and it seems to me that the government doesn’t have the power to keep them in existence, at least not at the scale he claims. Practically the only way it could do so is through regulation, and while it’s true that there’s plenty of government-enforced red tape keeping some useless jobs alive, it surely can’t sustain 39% of the population working on useless things.

I’d say there are three main reasons for BS jobs:

  • actual failure modes of the market:

    • there are some “zero sum” fields, e.g. corporate law: you need lots of corporate lawyers to defend yourself against other companies with lots of corporate lawyers, and in some sense it’s true that if all corporate lawyers disappeared at the same time, we’d all be better off; this is much like the nuclear weapons problem.

    • there are jobs which exploit failure modes of the system, e.g. offshore tax specialists or lobbyists.

  • excessive red tape

  • and, I think the biggest explanation, there’s a decent number of jobs that look useless, even to those who do them, only because the system has become very complex. This may be a problem in itself, but it doesn’t seem to be what David Graeber is pointing at.

The author’s other point, that critically important jobs are underpaid, is also straightforwardly explained by a market economy trying to allocate talent efficiently, rather than by a grand conspiracy.

On the whole, I could agree if the point were that the neoliberal push that started in the ’80s is the root of all evil and that markets have too many failure modes to be useful, but it seems hard to defend the claim that bullshit jobs are a deliberate, economically detrimental political maneuver.

Answer to Job, but the SSC one

This is a theodicy that argues that once God has created a perfectly good universe, it can’t increase total happiness by creating a second copy of it, because in the absence of any characteristics to distinguish the two universes, they would be exactly the same, and the second copy would not contribute any more happiness.

One immediate counter-argument is that God could probably “tag” the universes in one way or another, to make them actually distinct. For instance, assuming we are talking about standard 3-dimensional universes, an omnipotent God could create an infinite number of them along a fourth dimension, and each universe’s distinct $w$ coordinate would ensure its uniqueness.

But this post raises interesting questions about identity. As Max Tegmark notes, for example, if the universe is infinite and time and space are discrete, there must be infinitely many observable-universe bubbles that are exactly like ours. Even though we have different spatial coordinates, am I materially different from an exact clone of myself living in another such bubble? For instance, if one of the two bubbles is destroyed by a vacuum decay event, but the other is intact, has anybody died in a meaningful sense?

Another question this poses concerns the limits of such equality: if two indistinguishable entities contribute no more moral weight than a single one of them, do two almost-indistinguishable entities both count fully? Is there a discount factor the more similar they are? A genuine discontinuity in moral value seems hard to accept.

Integrated Information Theory, by Scott Aaronson

Christof Koch claims that Integrated Information Theory (IIT), which he champions, implies that a computer simulating a brain can’t be conscious. The basic premise is that

any physical system that has causal power onto itself is conscious. What do I mean by causal power? The firing of neurons in the brain that causes other neurons to fire a bit later is one example, but you can also think of a network of transistors on a computer chip: its momentary state is influenced by its immediate past state and it will, in turn, influence its future state.

Scott Aaronson summarizes this, perhaps more clearly, as:

(1) to propose a quantitative measure, called Φ, of the amount of “integrated information” in a physical system (i.e. information that can’t be localized in the system’s individual parts), and then

(2) to hypothesize that a physical system is “conscious” if and only if it has a large value of Φ—and indeed, that a system is more conscious the larger its Φ value.

But much like moral theories, we don’t have much ground on which to evaluate theories of what is conscious. The best we can do seems to be to check how well they match our intuitions for cases that we are reasonably certain of: Φ must be very large for systems that are certainly conscious (like humans), and very low for systems that are certainly not (like rocks).

Scott then constructs a suitable matrix-and-vector pair and shows that the process that repeatedly multiplies them has an immense value of Φ, so it is predicted by IIT to be immensely conscious, which makes little sense.
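
Computing Φ exactly is intractable for anything but tiny systems, but the flavor of the counterexample is easy to reproduce. Below is a toy sketch (my own illustration, loosely following the Vandermonde construction in Scott’s post; it does not compute Φ itself): over a prime field, every entry of a Vandermonde matrix with nonzero evaluation points is nonzero, so perturbing any single input coordinate changes every output coordinate. The dynamics therefore can’t be decomposed into independent parts, which is exactly the property that makes Φ huge.

```python
# Toy "integrated" dynamics: repeatedly apply a Vandermonde matrix over GF(p).
p = 101                        # a prime; all arithmetic is modulo p
n = 8
nodes = list(range(1, n + 1))  # distinct nonzero evaluation points
V = [[pow(x, i, p) for i in range(n)] for x in nodes]  # V[j][i] = nodes[j]^i

def step(state):
    """One time step of the system: state <- V @ state (mod p)."""
    return [sum(V[j][i] * state[i] for i in range(n)) % p for j in range(n)]

state = [3, 1, 4, 1, 5, 9, 2, 6]
perturbed = state[:]
perturbed[0] = (perturbed[0] + 1) % p  # flip a single input coordinate

# Since every V[j][0] is nonzero mod p, all n outputs differ after one step:
out, out_p = step(state), step(perturbed)
print([j for j in range(n) if out[j] != out_p[j]])  # [0, 1, 2, 3, 4, 5, 6, 7]
```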

Origins of Judaism, on Wikipedia

This changed my model of early Christianity quite a bit, especially:

In the 1st century, many Jewish sects existed in competition with each other (…) The sect of Israelite worship that eventually became Rabbinic Judaism and the sect which developed into Early Christianity were but two of these separate Israelite religious traditions. Thus, some scholars have begun to propose a model which envisions a twin birth of Christianity and Rabbinic Judaism, rather than an evolution and separation of Christianity from Rabbinic Judaism. For example, Robert Goldenberg (2002) asserts that it is increasingly accepted among scholars that “at the end of the 1st century CE there were not yet two separate religions called ‘Judaism’ and ‘Christianity’”.

Book Review: The Hungry Brain on Slate Star Codex

In the 1970s, scientists wanted to develop new rat models of obesity. This was harder than it sounded; rats ate only as much as they needed and never got fat. Various groups tried to design various new forms of rat chow with extra fat, extra sugar, et cetera, with only moderate success – sometimes they could get the rats to eat a little too much and gradually become sort of obese, but it was a hard process. Then, almost by accident, someone tried feeding the rats human snack food, and they ballooned up to be as fat as, well, humans.

I don’t understand how some people still claim that the market isn’t humanity’s best invention. Want to make food so good people won’t be able to limit how much they eat? The market has your back. More on Doritos here.

Related: For, Then Against, High-Saturated-Fat Diets; the conclusion that we don’t actually understand whether the obesity epidemic is even related to diet was interesting to me.

Wine-dark sea, on Wikipedia

In the vein of the famous bicameral mind theory, this much less ambitious, but also controversial, theory asserts that the ancient Greeks and Romans had a different conception of colors than we do today, as evidenced by descriptions of the sea as “wine-dark”.

The Relevance of Anarcho-syndicalism, by Noam Chomsky

I knew next to nothing about anarcho-syndicalism, so even the definition was useful to me:

[The seminal thinkers of anarcho-syndicalism] had in mind a highly organized form of society, but a society that was organized on the basis of organic units, organic communities. And generally, they meant by that the workplace and the neighborhood, and from those two basic units there could derive through federal arrangements a highly integrated kind of social organization which might be national or even international in scope.

Deepity, on Wikipedia

Marginally interesting, but this term seems to apply to a lot of stuff on the internet:

Dennett adopted and somewhat redefined the term “deepity”, originally coined by Miriam Weizenbaum. Dennett used “deepity” for a statement that is apparently profound, but is actually trivial on one level and meaningless on another. Generally, a deepity has two (or more) meanings: one that is true but trivial, and another that sounds profound and would be important if true, but is actually false or meaningless. Examples are “Que sera sera!”, “Beauty is only skin deep!”, “The power of intention can transform your life.”
