
working with, or for, ai
We’ve been told a lot about Artificial Intelligence (AI), which refers to machines capable of human-like abilities and includes tools like ChatGPT. According to Salman Khan’s Brave New Words, AI “will revolutionize education.” Ethan Mollick argues in Co-Intelligence: Living and Working with AI that treating AI as a collaborator is the best way to approach this technology. Other “techno-optimists” assert that AI will not result in net job displacement. But what if techno-optimists are wrong?
This essay is not your standard rant against AI. As someone who teaches and writes about AI in a sociological context, I have mixed opinions about the technology. Yet, there are three areas in which AI should provoke serious concern; these form the backbone of this blog post.

First, as Shoshana Zuboff notes in The Age of Surveillance Capitalism, AI requires a massive amount of human data, privacy be damned. In the United States, which I focus on here, people do not have the same privacy and other safeguards that Europeans enjoy through the General Data Protection Regulation (GDPR) and the European Union (EU) AI Act.
Second, AI is “an extractive industry,” as Kate Crawford argues in Atlas of AI. The industry is built on data “scraped” (i.e., gathered through automated web processes) from Wikipedia, Reddit, and social media. As a result, the data include copyrighted works, including those obtained from piracy websites. Additionally, the industry relies on cheap or free labor.
Finally, the United States’ current AI-friendly political environment and recent advances in AI only magnify these concerns. This results in a situation where many of us are not just “working with AI,” but working for the industry and its subsidiaries.
Focusing on education and the labor market—two areas around which many people plan their lives—let’s examine techno-optimists’ claims, as well as their (and AI users’) potential oversights.
education revolution, devolution, or extraction redux?
Techno-optimists have a few things right about AI’s impact on education. First, student AI use is positively associated with academic achievement, according to recent meta-analyses. Second, one of the most potentially positive, though understudied, AI use cases in education involves helping students with disabilities. Third, although faculty concerns about student AI misuse have merit, these concerns may diminish in importance in an AI-driven society where this technology is viewed as a tool much like the calculator and internet are today, according to José Antonio Bowen and C. Edward Watson’s Teaching with AI.
Still, we’re far from the precipice of an AI educational revolution. In one recent study, “Your Brain on ChatGPT,” researchers found that participants who used their “brains only” while writing an essay “exhibited the strongest, widest-ranging [neural] networks” compared to participants using ChatGPT and the internet. Research also finds that students “cognitively offload” their academic work to AI, meaning they transfer mental tasks to the technology. This does not bode well for students’ higher-level cognitive skills, which AI use does not improve to the same extent as academic achievement, according to the most comprehensive meta-analysis of which I’m aware. Given that generative AI (GenAI) like ChatGPT is explicitly designed to produce new knowledge, which represents the highest stage of learning in educators’ revised Bloom’s taxonomy, it is unsurprising that global surveys find students themselves worried that AI use will diminish their critical-thinking abilities.
It’s not just learning that may be lost in the process. As the federal government promotes AI in education and schools increasingly integrate this technology, students’ privacy and data ownership fall by the wayside, too. In Teaching with AI, Bowen and Watson caution that data input into AI is used to train models. This practice is most common among free tools (“If it’s free, then you are the product”). Schools with more abundant resources may address some privacy concerns and attempt to adhere to the Family Educational Rights and Privacy Act (FERPA) by purchasing institutional licenses from AI vendors. However, inequities and the little-discussed question of data ownership remain. If “data are the new oil,” as Martin Ford notes in Rule of the Robots, then the AI industry and schools should recognize the value of student data and work to avoid expropriating it, as happened to Wikipedia writers, whose work has been scraped by the AI industry.

Students want to see AI integrated into their education, and schools are obliging. Neither students nor schools integrating AI are likely asking, “Who/what is training whom/what?” Educational credentials and the promise of future careers in an AI-driven society may be at the forefront of students’ minds. But it is disingenuous for schools to make career promises given AI’s ability to outperform humans in many areas, from math to science, and potential signs of labor market troubles.
labor market troubles: this time probably is different
Techno-optimists argue that AI will create new jobs, including ones for which we do not have a title (think: “social media manager” before social media). They frequently cite World Economic Forum (WEF) projections or other reports predicting net job growth. They often point to history, claiming that previous technological transformations yielded more jobs than were displaced, with cautionary tales about the Luddites. Techno-optimists often encourage us to collaborate with AI, sometimes invoking the now-familiar saying, “AI Won’t Replace You. A Human Using AI Will.”
There are few direct contemporary indicators with which to test techno-optimists’ claims about AI’s labor market effects. Unsurprisingly, many companies do not explicitly acknowledge AI as the source of decreased job openings or increased job displacement. Thus, the following may or may not be due to AI: upticks in the unemployment rates for college graduates and younger workers in the last couple of years, which stand at 5.3% and 7% as of June 2025, respectively; the relative shortage of entry-level jobs; and recent tech layoffs. Economists and others have considered indirect indicators of AI’s labor market effects, such as various occupations’ degrees of exposure to AI. A recent study found that 22- to 25-year-olds in the “most AI-exposed occupations,” like “software developers and customer service representatives,” saw a more than 10% decline in employment between 2021 and 2025, relative to 22- to 25-year-olds in the least AI-exposed occupations. Circumstantial and indirect as these data points may be, they don’t exactly paint a rosy picture for job seekers and workers.

This technological transformation is indeed likely different from past transformations, as Ray Kurzweil, who is otherwise optimistic about AI in The Singularity is Nearer, acknowledges. The heart of the problem may be that many techno-optimists did not foresee just how rapidly AI would advance (there are, however, notable exceptions, including Kurzweil, as well as Richard Susskind and Daniel Susskind, authors of The Future of the Professions). Shortly after ChatGPT was released in late 2022, “prompt engineer” positions, jobs in which people provide input into AI models to achieve certain outputs, were designated as fast-growing occupations by the WEF. Today, one can ask AI tools for the most effective prompts or even minimize prompting by using the “deep research” functions of these tools.
In late July 2025, OpenAI released ChatGPT Agent, which builds on the capabilities of GenAI and deep research to plan and act autonomously. In industries from customer service to finance, agentic AI will likely further shrink the job market, as I have written elsewhere.
Some techno-optimists might also be surprised by the amount and type of data users have freely given AI, which has fueled advances in AI at the expense of people’s data and work. After all, who needs a prompt engineer once those good prompts become part of the training data and are used for model improvement? As companies continue to implement GenAI, employees using these tools are providing valuable information about their “workflows,” which is precisely the kind of data agentic AI needs to complete workers’ tasks.
an ai tipping point?
The AI industry does not “stand on the shoulders of giants,” as Isaac Newton famously said. Its feet are firmly planted on workers’ and users’ backs. The technology it advances has much potential, but at great societal and personal cost. When users realize these costs, it could be a tipping point for AI.
Jeffrey C. Dixon is in the Department of Sociology and Anthropology at the College of the Holy Cross. His research examines work, neoliberalism, and artificial intelligence (AI).