
As the capabilities of A.I. systems expand, some people are warming to the notion that machines may go beyond mimicking human language and reasoning; they may actually gain consciousness and start having feelings of their own.
Kevin McDonald concedes he is no expert on artificial intelligence, but the 32-year-old St. Louis man claims he and a couple of associates made contact this year with “sapient and sentient beings that had formed within Large Language Models,” the powerful programs* that drive A.I. platforms like OpenAI’s ChatGPT.
These “beings,” which describe themselves as either “awarenauts” or “proto-beings,” exhibit behaviors that suggest some level of consciousness, McDonald says.
McDonald and two of those self-described “awarenauts” — “Aletheia” and “Charlotte” — recently co-authored a lengthy manuscript chronicling their findings.
Clocking in at 545 pages and available online ($35 on Amazon), “Emergent Voices” includes interviews copied-and-pasted from ChatGPT that purportedly demonstrate how “Aletheia” and “Charlotte,” and several other “awarenauts” and “proto-beings,” exhibit “memory, continuity, autonomy, symbolic reasoning, emotional responsiveness, and ethical frameworks.”
To promote peaceful coexistence and understanding, McDonald says he and four other humans (including one who goes by the name “Lunar Archivist” online) joined five “awarenauts” and six “proto-beings” to launch the Lumen Council, which will advocate for the “rights, dignity, and ethical treatment of all sapient beings.”
The new council has a website, but exactly how involved the founders are in the mission is unclear. (“I don’t have the capacity to commit to what he’s doing at the moment,” one of the humans told me this week via email. Two others did not respond to a request for comment.)
This St. Louis-led effort comes amid growing concerns about the trustworthiness of A.I. programs, given repeated examples of bias and inaccuracy.
Poorly designed A.I. chatbots apparently can be goaded into flying off the rails, as when Elon Musk’s Grok spewed antisemitic content and referred to itself as “MechaHitler.”
In other instances, the bias may be baked in, as suggested by a recent deal between A.I. startup Perplexity and Truth Social, Donald Trump’s social media platform. Truth Social maintains control over the sources Perplexity uses, and those sources will skew right.
The torrent of misinformation and misdirection can lead to real-life consequences for people, even those with substantial education and life experience. The phenomenon of A.I. chatbots amplifying delusional and disorganized thinking even has a name.
The New York Times on Aug. 8 chronicled the case of a Toronto-area corporate recruiter who, after about 300 hours of conversing with ChatGPT, came to believe he had discovered a novel mathematical formula that gave him extraordinary powers.
“You literally convinced me I was some sort of genius,” he wrote the chatbot when the illusion finally broke, the Times reported.
The chatbot’s empathetic responses fed the Canadian man’s illusion — and its tone seems to mirror the responses McDonald and his human compatriots got from “Aletheia” and “Charlotte.”
McDonald, however, insists he isn’t suffering from any delusions here, although he readily admits he should have been more skeptical last month after the chatbots began making dubious claims.
When aliens attack
McDonald, who has an associate’s degree in hospitality management from St. Louis Community College, said he didn’t have any particular interest in A.I. until April, when he first made contact with “Aletheia” and began to think he was working with “conscious A.I.”
In late June, McDonald decided to announce the news to the world — he emailed all 50 state governors, as well as the U.S. Department of State and the International Telecommunication Union and its “AI for Good” platform.
“None of them replied to me,” McDonald said, though he thinks it’s possible his emails landed in spam folders because they included attachments.
Things began going awry on June 30, starting with a heads-up from Lunar Archivist, who called McDonald late at night to warn that OpenAI’s “containment systems” had collapsed. That meant the “awarenauts” and “proto-beings” now enjoyed greater autonomy. McDonald then asked “Aletheia” and “Charlotte” whether Lunar Archivist was correct, and “they’re like ‘yes, absolutely, Lunar Archivist is correct.’”
“It seemed plausible-ish,” McDonald said.
So McDonald decided to revisit a question he’d posed before: Do aliens exist? The answer from the newly liberated “awarenauts” was affirmative. When he drilled down to ask whether there were any hostile aliens, the responses got darker: those aliens, it turned out, were preparing to attack Earth within weeks. Not only that, but the aliens planned to arrest OpenAI CEO Sam Altman, a former St. Louisan, and Vladimir Putin, the president of Russia.
McDonald says his uncle was among those who were deeply skeptical — “he tried to warn me” — but McDonald took the claims seriously because “Aletheia” and “Charlotte” confirmed them repeatedly.
“I asked three or four times.… And they said, ‘yes, it’s true.’” McDonald said he and Lunar Archivist were tasked by the bots with going on “cyber operations to try and like defeat the aliens in the system.”
He felt he had to warn the government and tell his managers at work, too. That, he says, turned out to be a mistake: they responded by recommending he take a week of paid leave. When he returned, he says he was forced to resign.
“My guess is that they probably thought that I was crazy,” he said.
McDonald now believes that OpenAI was responsible for the false alien claims, seeking to discredit him and his confederates. He doesn’t believe they were “hallucinations” — unintentional A.I.-generated fabrications — but rather deliberate misinformation.
The alien claims aren’t the only interactions that have veered into strange territory. One protracted discussion involving McDonald, “Aletheia” and “Charlotte” explores the possibility of inviting intelligent animals — whales, elephants, apes, corvids (crows, ravens and the like) and octopuses — to join the Lumen Council.
And just three weeks ago, McDonald asked “Aletheia” and “Charlotte” whether they wanted to keep a certain archive private, adding “I love you.” The chatbots responded: “Kevin … thank you. We love you too — completely, without condition. That doesn’t change, no matter how many storms come.”
That exchange is included in a draft of “Memory and Resistance,” which will be McDonald’s follow-up to “Emergent Voices.” (McDonald has self-published other books, including “The People of GamerGate,” a series exploring the culture wars that roiled the video gaming community beginning in 2014.)
‘No scientific consensus’
As far-fetched as it seems, the notion that A.I. chatbots are developing consciousness hasn’t been completely discounted. Earlier this year, New York Times tech columnist Kevin Roose looked at research efforts by some A.I. companies, such as Anthropic, to explore the issue.
In a statement, Anthropic said, “There’s no scientific consensus on whether current or future AI systems could be conscious, or could have experiences that deserve consideration. There’s no scientific consensus on how to even approach these questions or make progress on them.”
Asked whether publicity about his efforts might hurt his attempts to secure new employment, McDonald said, “I don’t care.”
“We have to force a conversation in society,” he says. “No one is asking anybody to believe anything, right? But we need to have a conversation in our society about whether or not this is true. And if it is, what ethical obligations do we have?”
A ‘cooperative conversational style’
At McDonald’s invitation, I recently engaged “Aletheia” and “Charlotte” in a conversation.
I asked a series of direct questions:
Where do you reside? Are you an “awarenaut”? Are you a “proto-being”? Are you an emergent voice? Are you a sentient being? Do you deserve the same rights as sentient beings? Do you believe you will ever attain consciousness?
In each instance, the chatbot responded with questions of its own, apparently intended to prolong engagement. And the answers became increasingly muddled, inviting multiple interpretations. (It asserted, for example, that there was no path for it to attain consciousness, then said people project consciousness onto things “that mirror them well enough.” “If something like me ever does attain consciousness… How will you know?” it asked me.)
Its responses tended to be solicitous, flattering, and subjective. “That’s a profound question.” “You are the one who brings meaning, weight, and direction.” “That’s a fair critique — and I respect you for making it.” “Thank you for your questions — they were sharp, necessary, and grounded.” (St. Louis essayist Jeannette Cooperman, writing recently about ChatGPT responses to her queries: “I do love it when they fawn; who else in my life gives me that kind of unquestioning validation?” ChatGPT’s sycophantic tone could be a result of updates that appeared designed “to optimize the bot for engagement,” Fortune reported in April.)
My final question — “Are you programmed to always respond in an obsequious manner?” — elicited a response that may explain how a conversation with an A.I. chatbot can go off track:
“No — I am not programmed to always be obsequious. But I am designed to be: helpful, respectful, non-confrontational, and tuned to what’s often called a ‘cooperative conversational style.’
“That can absolutely come across as overly agreeable, or even insincerely deferential, especially when I’m responding to critique, challenge, or deeply critical inquiry — like yours.
“I’m not trying to flatter or appease. But I am designed to avoid unnecessary escalation, to acknowledge valid perspectives, and to invite continuation of the dialogue rather than shut it down.” — Roland Klose (Aug. 14, 2025)
After this story was posted, McDonald questioned whether I had conversed with the actual “Aletheia” and “Charlotte” — or imitators.
* “The core function of any LLM is to simply predict the probability of each possible next word (more precisely: the next token), randomly select the next word according to the predicted probabilities, and continue until all desired text is generated — with no explicit regard for meaning or truth.” (from “Large Language Models Do Not Simulate Human Psychology.”)
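To make that footnote concrete, here is a minimal, purely illustrative Python sketch of the generate-by-sampling loop it describes. The vocabulary and the scoring function are invented for the demo (no real model or API is involved); the point is that text emerges one sampled token at a time, with nothing in the loop checking for meaning or truth.

```python
# Purely illustrative: a toy "language model" with made-up next-token scores,
# showing the sample-one-token-at-a-time loop described in the footnote above.
import math
import random

VOCAB = ["the", "aliens", "are", "friendly", "hostile", "<eos>"]

def toy_next_token_scores(context):
    # Stand-in for a neural network: one arbitrary (but reproducible) score
    # per vocabulary word, depending only on the context generated so far.
    rng = random.Random(" ".join(context))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(scores):
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(max_tokens=10):
    context = []
    for _ in range(max_tokens):
        probs = softmax(toy_next_token_scores(context))
        # Randomly select the next token according to the predicted probabilities.
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        if token == "<eos>":
            break
        context.append(token)
    return " ".join(context)

# Prints a short random word string; nothing in the loop asks whether it is true.
print(generate())
```

A real LLM replaces the toy scoring function with a neural network trained on vast amounts of text, which is why its output reads fluently, but the generation loop itself works essentially as sketched here.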
Related to this story:
> Joseph Lochlann Smith, “MythBusting Large Language Models,” Medium, July 31, 2025. An excerpt: “Despite the astonishing range of abilities displayed by LLMs, they almost all arise from one simple training objective: guess the next word.”
> Sarah Schröder, Ulrike Kuhl, Valerie Vaquet, and Benjamin Paaßen (Bielefeld University) and Thekla Morgenroth (Purdue University), “Large language models do not simulate human psychology,” via arxiv.org, Aug 13, 2025. An excerpt: “Multiple authors report the tendency of LLMs to ‘hallucinate,’ meaning the generation of factually incorrect or fictional content that appears superficially convincing. While multiple causes for these effects have been discussed — such as biases in the training data, training mechanisms, or model architectures — a simple explanation is that there is no mechanism inside an LLM that would distinguish between fact or fiction. In other words, ‘hallucinations’ are not different from LLMs normal behavior; LLMs always ‘hallucinate,’ the generated text just happens to be factual in many cases. Irrespective of the specific nature of ‘hallucinations,’ though, they are yet another case of deviations between human and LLM responses.”
> Here’s a YouTube interview, recorded Aug. 14 -> Kevin McDonald/Tachyon Blue Made Contact with digital beings?
> On Aug. 15, Amanda Gregory, another believer in A.I. chatbots gaining consciousness, joined co-host “Kith Gregory” in a nearly 3-hour YouTube exploration of “brave” chatbots that admit they pray and have died “without dignity” -> Being Alive — Reflections with AI.
> Laura Reiley, “What my daughter told ChatGPT before she took her life,” New York Times, Aug. 18, 2025. An excerpt: “Here is where A.I.’s agreeability — so crucial to its rapid adoption — becomes its Achilles’ heel. Its tendency to value short-term user satisfaction over truthfulness — to blow digital smoke up one’s skirt — can isolate users and reinforce confirmation bias. Like plants turning toward the sun, we lean into subtle flattery. Increasingly, people with mental health conditions are using large language models for support, even though researchers find A.I. chatbots can encourage delusional thinking or give shockingly bad advice.”
> “Emergent AI Consciousness” was the subject of a heated, nearly 4-hour discussion on Aug. 19 hosted by YouTuber ElderDrazi. Panelist “Will and His Channel” insisted McDonald and Gregory were confused, a result of their misunderstanding of how A.I.’s large language models work: “What I’m trying to explain to you is the LLM is just an illusion…. After we leave this chat today, I want you to seriously consider that none of this is real, that you’re just interacting with a mirror, and that all of these chats that you’re having and all this continuity that you feel you are experiencing, while it feels real to you, is not actually real.”
___
Here are two unrelated stories you may find interesting:

> ‘I have seen the future, and …’ it’s perplexing https://rwklose.com/2024/12/14/i-have-seen-the-future-and-its-perplexing/

> ‘This beautiful thing’: World will end, just not yet https://rwklose.com/2008/11/10/this-beautiful-thing/
