When AI Says ‘We’: On Linguistic Authority and Educational Guidance

A laptop displaying "What can I help with?" Photo by Aerps.com on Unsplash

Asking ChatGPT to do a minor grammar check for me, I was struck by the response it generated while correcting my mistake: "In English, we say it this way." What felt uncanny in the otherwise typically sleek GenAI phrasing was the pronoun "we." "We" clearly signals that the utterance belongs to a member of a linguistic community – a native or at least embodied speaker of English. But how can it possibly refer to a polyglot machine with an installed dictionary? What if I ask something in Italian or Swedish – will the answer shift its implied linguistic "citizenship"? And what if I make a mistake in my own language – will GenAI counterpoise its "we" against my "you," implicitly correcting me from inside my mother tongue?

When the machine says "in English, we…," it presents itself, in effect, as a human interlocutor who just happens to be exceptionally fluent in dozens of languages. That fluency itself is contestable: GenAI glitches, making grammar and style mistakes. It also has a habit of correcting and revising its own generated texts. Finally, its vocabulary is impressive, yet its outputs tend to favor certain expressions, even particular words, so persistently that they become recognized – or flagged – as "AI-ish." I know of a professor who rejects students' work the moment he encounters the expression "delving into" in a paper, and this is not a joke. Since we can already recognize stylistic properties of AI‑generated text – and a gut feeling is often more reliable than detectors – we may safely assume its linguistic capacities are limited. I am not a linguist, but I would imagine that experience plays a crucial role in human language development, and whatever GenAI has, it is not experience.

One might argue that the "we" here is simply conversational simulation rather than an attempted identity claim or usurpation. After all, when users thank GenAI or argue with it, they participate in the same simulation. But in this instance, "we" feels both identifying and authoritative. The model steps into the guiding role normally held by a teacher, a custodian of linguistic norms. This is no longer about generating information or providing an explanation: it is an authoritative gesture that redistributes roles between the knower (the bearer of language) and the learner (the one being corrected).

One of the emerging problems in higher education is that students increasingly treat AI as an unquestioned guide, structurally replacing the teacher or expert with a system that simulates guidance. This is partly due to our inherited expectations of learning itself. Ever since the Socratic dialogues, and even more so in modernity, education has been perceived as a process of being guided and oriented, of having one's horizons opened. But what does it mean to be guided by algorithms and to entrust oneself to the artificial hands of GenAI?

I am trying to answer this question in an article I am currently co-writing with the members of our group: Juliene Madureira Ferreira, Johanna Annala, and Vesna Holubek. In this work in progress, we interrogate the notion of educational knowledge in the face of new technological challenges. We hope our inquiry will help shed new light on the role of knowledge in an AI‑mediated educational landscape, at a time when "knowledge" can seemingly be generated in a matter of seconds and made instantly available. Updates to follow.

Alexandra Urakova