In this blog post, I respond to a paper I find really interesting: “What If Algorithmic Fairness Is a Category Error?” by Arvind Narayanan.
For context, the paper asks us to reconsider a core approach in the AI fairness community for assessing the harms of AI systems. The author challenges the practice of auditing systems (whereby one assesses the degree to which their outputs comply with various technical definitions of fairness), arguing that examining outputs devoid of context cannot meaningfully address the broader problem of fairness in algorithmic decision making.
To ground the discussion, the author draws attention to how the deployment context of an AI system can have crucial implications for fairness, using the following example.
“Ironically, these types of (AI) screening procedures are rarely used for hiring the psychologists and software engineers who build such tools; their use is concentrated in occupations such as retail and call center workers who are paid and valued relatively little. One consequence is that the harms from the use of these tools are concentrated among lower-income people.”
The context in which these systems are deployed matters as much as, if not more than, whether the systems themselves are biased. Yet this question remains underexplored: how are we gradually constructing a tiered society in which one's background and socioeconomic status determine not only whether one is subjected to automated screening, but also the opportunities available (work-related, as described above) and the quality of services one can access (such as medical care or mental health care)? I argue that the first step is to examine the contexts in which these systems are deployed, and then to reflect on the broader role AI plays in offering quick technological fixes to deep-rooted societal problems, problems that persist precisely because those who bear their costs cannot afford the services that might address them.
The Myth of Progress Through Automation
The framing of automation as inherently progressive is pervasive yet rarely interrogated. Technology companies, policymakers, and even healthcare institutions often present AI-driven solutions as modern, efficient, and inevitable. They do so without rigorously examining whether these solutions actually improve outcomes for the populations they claim to serve (and without being transparent about it!). The narrative of progress can obscure the ways in which automation may entrench existing inequalities or deflect attention from structural reforms.
In a world where efficiency is treated as the highest value, we can expect that incorporating AI will reduce costs for service providers, with far less concern for whether the quality of service has actually improved for recipients. But even if we were to concede improvements in quality, a broader question remains unasked: Is the extractive nature of AI infrastructure a cost worth paying?
The environmental toll is substantial. Training a single AI model can emit over 626,000 pounds of carbon dioxide—nearly five times the lifetime emissions of an average American car (Strubell et al., 2019). U.S. data centers consumed an estimated 449 million gallons of water per day as of 2021, and a single large facility can consume as much water as a town of 10,000 to 50,000 people. According to Bloomberg, roughly two-thirds of new data centers built since 2022 are located in areas already experiencing high water stress. Meanwhile, each 100-word AI prompt is estimated to consume roughly one bottle of water, according to researchers at the University of California, Riverside. We have yet to hold a serious public conversation about whether these environmental and social externalities are justified by AI's promised benefits—particularly when those benefits accrue primarily to corporations while the burdens fall on communities and ecosystems.
The Case of Mental Health Chatbots
Consider the case of mental health chatbots. The evidence base for their efficacy remains limited and mixed (1,2,3). While some studies suggest modest short-term benefits for mild symptoms of anxiety or depression, few have demonstrated sustained improvement, and dropout rates are often high. Importantly, most trials compare chatbots to no treatment rather than to human-delivered care, making it difficult to claim equivalence—let alone superiority. For individuals facing complex or severe mental health challenges, chatbots may be not only inadequate but potentially harmful if they delay access to appropriate professional support.
Rather than directing public funds or creating market incentives toward the development of such tools, we might instead invest in holistic infrastructure: community centers staffed by trained professionals, offering not only treatment but also spaces for social connection and support. Models such as the UK's social prescribing programs—where clinicians refer patients to community activities like gardening groups, art classes, or exercise programs—demonstrate that health outcomes can improve when people are connected to their communities, not just to services. Such investments acknowledge that mental health is shaped by social determinants: loneliness, precarity, and exclusion cannot be addressed by an app.
As a side note on the U.S. context: recent surveys have found that approximately half of U.S. adults report experiencing loneliness, with some of the highest rates among young adults. Loneliness and social isolation are closely linked to mental health problems.
According to Bill McKibben in Deep Economy: The Wealth of Communities and the Durable Future: “we don’t need each other for anything anymore. If we have enough money, we’re isolated from depending on those around us–which is at least as much a loss as a gain. By some surveys some Americans confess that they don’t know their next door neighbors. That’s a novel condition for primates; it will take a while to repair those networks.”
If you were thinking that AI will solve the mental health crisis, I would offer this more reflective take instead.
Public Investment: Data Centers vs. Care
The allocation of public resources reveals societal priorities. States across the U.S. are competing fiercely to attract data centers through generous tax incentives, often at significant cost to public budgets. According to a CNBC analysis, 42 states offer full or partial sales tax exemptions to data centers, with 16 states granting nearly $6 billion in exemptions over the past five years. Texas alone lost an estimated $1 billion in tax revenue to data center subsidies in 2025, while Virginia forfeited over $730 million in 2024. Yet the direct benefits to local communities are often limited: one Microsoft data center in Illinois received more than $38 million in tax exemptions while creating just 20 permanent jobs.
A Virginia state audit found that, like most economic development incentives, the data center exemption "does not pay for itself": the state generated 48 cents in new revenue for every dollar it did not collect in sales tax between fiscal years 2014 and 2023. So we end up with a few more jobs, an investment that does not pay for itself, the extraction of publicly owned natural resources, and heat pollution as a byproduct.
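To make the scale of these subsidies concrete, here is a rough back-of-envelope sketch in Python using only the figures cited above; the variable names are mine, and the numbers are simply those reported in the CNBC analysis and the Virginia audit.

```python
# Back-of-envelope arithmetic using the figures cited above.
# Numbers are as reported (CNBC analysis; Virginia state audit); variable names are illustrative.

illinois_exemptions_usd = 38_000_000   # tax exemptions for one Microsoft data center in Illinois
permanent_jobs = 20                    # permanent jobs that facility created

subsidy_per_job = illinois_exemptions_usd / permanent_jobs
print(f"Public subsidy per permanent job: ${subsidy_per_job:,.0f}")            # ~$1,900,000

new_revenue_per_forgone_dollar = 0.48  # Virginia audit, fiscal years 2014-2023
net_loss_per_dollar = 1 - new_revenue_per_forgone_dollar
print(f"Net loss per dollar of forgone sales tax: ${net_loss_per_dollar:.2f}")  # $0.52
```

Even on this crude accounting, the public pays roughly two million dollars per permanent job and loses about half of every forgone tax dollar.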
By contrast, investment in healthcare infrastructure, through clinics, training programs for providers, and subsidized mental health services, can generate employment, improve population health, and strengthen the social fabric. What if, instead of channeling resources into data centers (placing high hopes on yet more technology that isolates us), we expanded social safety nets and invested in accessible, human-centered healthcare? Such investments would likely yield greater societal benefit than infrastructure designed primarily to scale technological solutions.
Who Benefits, Who Decides
Consider two scenes. In the first, a state legislature approves another round of tax exemptions for a hyperscale data center, hoping to attract jobs and investment. The data center arrives, consumes millions of gallons of water, strains the local grid, and creates a handful of positions. Residents who can afford to leave, those with the mobility and resources to escape rising utility bills and depleted aquifers, will eventually relocate. Those who cannot, stay.
In the second scene, a Medicaid recipient seeking mental health support is routed to a chatbot. A wealthier patient, meanwhile, books a session with a private human therapist. Both are told they are receiving "care." But one has access to human judgment, rapport, and clinical expertise; the other interacts with a system trained to simulate empathy at scale.
The analogy is not incidental. In both cases, the logic is the same: technology is deployed in ways that allow those with resources to opt out of its consequences, while those without absorb the costs. At the macro level, the externalities are environmental: water, energy, land. At the micro level, they are personal: dignity, attention, quality of care. In both cases, what is framed as innovation functions, in practice, as a sorting mechanism. The question is not whether AI can be beneficial, but for whom and at whose expense.
As a community, we should extend our efforts to evaluate the broader contexts of fairness in which these systems are deployed. This includes examining who is subject to their decisions, the domains in which those decisions operate, and who stands to benefit from deployment in the long term. These are inherently socio-cultural and socio-political decisions that the community must take seriously, particularly given that deployment within existing economic structures, such as the neoliberal environment, may exacerbate existing inequalities.