Academic Existence in the Age of Artificial Intelligence
- Veysel Bozkurt

For over a decade, I had been searching for an artificial intelligence (AI) expert to help analyze large-scale datasets. I held several interviews, but none of them led anywhere. Then ChatGPT arrived in late 2022 and became a beacon of hope. Claude, Gemini, and Perplexity followed... Each with a different skill set, a different personality.
As an academic, watching this rapid development was incredibly exciting at first. Until then, AI had always been “something coming soon” in academic life. Now, it had become part of our daily practice. I had about three years left until retirement age. If everything went as planned, my intention was to write the books I had been thinking about for years during my retirement. At first, I thought AI could accelerate this process. Just imagine: you load hundreds of thousands of pages of information into artificial intelligence and write higher-quality books based on a much broader literature. I could scan more articles than I could ever read on my own, request comparisons, and establish theoretical connections that I felt were missing.
I had no economic expectations from these books, nor would I benefit from academic evaluation incentive awards. There was no publication incentive system for retirement anyway. In fact, I would have to cover the costs of accessing all that digital material from my limited retirement pension. For an academic who had spent his life in bookstores since adolescence and who had invested money saved from his food budget during his student days in books, it was nice to dream like this in the twilight of his life.
Books, as for many of my colleagues, had always been part of my identity. The shelves covering the walls of my home were, in a way, my intellectual biography. In my retirement, I could perhaps add my new books to those shelves. However, after a while, as I watched the advances in artificial intelligence, a seed of doubt was planted in my mind.
Yes, if I had access to the necessary digital documents, I could now complete in about four months a book I had been contemplating for almost twenty years, one that would otherwise have taken me about four years of retirement to write. Maybe even less. Seeing the literature syntheses produced by artificial intelligence, I realized that my months-long note-taking process might now be unnecessary. The machine could read hundreds of articles, extract the main arguments, identify differences of opinion among authors, and even suggest missing theoretical connections.
So, with this collaboration, would the book I wrote in four months be mine, or would it belong to artificial intelligence? When this question first popped into my head, I easily dismissed it. “Don't be ridiculous,” I told myself, “the idea is yours, the structure is yours, the knowledge accumulated over years is yours. Artificial intelligence is just a tool.” But is it really just a tool? A pen, a word processor, even a research assistant can be defined as “just a tool.” But artificial intelligence does something different. It seems to think. It synthesizes. Sometimes it makes connections that don't occur to me. When I read the sentences it suggests, I say, “That's exactly what I wanted to say, but I didn't know how to express it.”
In that case, who is the author? Me or it?

I asked this question at many conferences I've given recently, to audiences mostly made up of academics and students. The answers were generally reassuring. “You generate the ideas, the structure is yours, the critical evaluation is yours. Artificial intelligence just speeds things up,” they said. I thought so too. I often reminded my conference audiences: “In an age when machines have been invented, how right is it to tell people to dig with shovels and pickaxes?” There would be laughter in the hall. It seemed like a logical analogy. But logic doesn't always convince the heart.
I confess, I still wasn't at ease. After more than forty years of academic work, I was afraid of people saying, “Look, he had AI write his book.” No matter how much academia talks about universal standards, it is one of the places where labeling is most intense. A whisper like, “You know that prolific professor? Turns out he had AI do all of it,” would wipe out years of work. I also saw that it wouldn't be long before AI surpassed me in writing. Generative AI didn't make some of the writing mistakes I did. Its sentence structures were more fluid. Its paragraph transitions were smoother. Its presentation of arguments was clearer. Of course, concepts like depth, originality, and “voice” were still human traits. But how measurable are these? How well can a reader distinguish the warmth of a sentence or the originality of a thought?
This concern of mine was reinforced when I watched a talk by Yuval Noah Harari in January 2026. Harari said he was working on a new book and that it would probably be his last. Why? Because he believed it wouldn't be long before artificial intelligence surpassed him. “Artificial intelligence can now tell stories, manipulate people's emotions. For millions of years, storytelling was a uniquely human skill. Not anymore,” he said. For a moment, I froze in front of the screen.
You know Harari. “Sapiens,” “Homo Deus,” “21 Lessons for the 21st Century,” “Nexus”... His books have been translated into dozens of languages and read by millions. Whether you agree with his ideas or not, he is one of the most widely read figures in contemporary thought. TED talks, Davos panels, advising heads of state... And now this figure was announcing the end of his writing career. He didn't explicitly say he'd been defeated by the machine, but the implication was clear. If a globally renowned author was facing this reality, what would become of an academic like me, searching for a post-retirement pursuit?
The Disappearance of Books from My Life
This question pointed to a crisis much deeper than losing a “leisure activity.” For me, writing books was not just a hobby; it was central to my academic identity. I had come to know my professors through their books. In a sense, an academic is the sum of their works. Articles are important, of course, but in the social sciences, books occupy a different place. A book is the most concrete way of saying, “I was here, I thought these things, I said these things.”
Now I was faced with the possibility of this path closing. Or rather, the path is open, but I don't know if it's me walking there or an algorithm dancing on my fingertips. The identity crisis created by artificial intelligence is not just for professors like me who are approaching retirement, but for the entire academy. Is a young doctoral student using artificial intelligence when writing their first article? If they are, is this considered “cheating”? If they are not, how will they compete with those who are? In an associate professorship application, one candidate has written 50 AI-assisted papers, while the other has written 10 using traditional methods. The second candidate's papers may be more profound, but how will the jury understand this?
What will happen when artificial intelligence makes productivity limitless, while academic evaluation criteria are still based on “numbers”?
An Irreversible Transformation
Artificial intelligence is here to stay. This is not a temporary trend. Just as the advent of the internet in the 1990s changed academic life, artificial intelligence is changing it now. But the difference is that the internet made access to information easier, while artificial intelligence seems to be producing information. This is a completely different level.
Current evaluation criteria are becoming obsolete. Academic journals operate with a peer review and publication process that takes more than two years. As someone who has served as an editor for various academic journals for many years, not just as an author, I know the inner workings of publishing. You request evaluations from at least two reviewers. It's a well-intentioned system, but incredibly slow. It's difficult to find reviewers who are experts in the field and take the work seriously. You have no funds to pay reviewers for their labor. If you don't know them personally, fewer than ten percent of those you ask to review will agree. The review process for an article takes at least six months, sometimes over a year. If the article is lucky, it reaches readers an average of one and a half to two years after it is written. Meanwhile, artificial intelligence releases a new version every day. Faced with such exponential growth, how can humanity's slow, linear production survive?
Many journals have now started using artificial intelligence detection software. Systems like Turnitin attempt to estimate how much of the text was written by artificial intelligence. But this is a cat-and-mouse game. Researchers are using other artificial intelligence to “humanize” artificial intelligence text. One machine humanizes what another machine wrote, then another machine tries to detect it. It's a strange cycle. The current criteria of the Council of Higher Education (YÖK) and universities are quantity-based. How many articles do you have, how many citations have you received, what is your h-index? These criteria were already problematic; they did not fully measure quality. Now, they are completely collapsing. If a researcher can produce 20 articles a year with AI, they appear very “successful” according to the old criteria. But what have they actually produced? How will the system measure the production of “original ideas”? Until we find an answer to this question, a kind of academic inflation is inevitable.
Students and the New Generation
Today's undergraduate students don't know a world without artificial intelligence. For them, this is normal. But does this normality dull their critical thinking skills? Does a student who has all their assignments done by artificial intelligence really learn? Or do they just gain the skill of “prompt writing”?
I often talk with my colleagues. They mention that the term papers they assign are prepared entirely by artificial intelligence, and that students cannot answer questions about their own assignments. A study conducted at MIT last year highlighted a similar problem: students who wrote papers with artificial intelligence showed lower brain activity and remembered less about the assignments they had prepared.
In a world where academics have made artificial intelligence part of their work, can students be told “don't use it”? And how can we ensure the academic development of students who have artificial intelligence write their assignments without putting in any effort themselves? In the coming period, artificial intelligence may reduce the need for academic staff at universities. A professor with an artificial intelligence assistant can do the work that used to require three professors. Research assistant positions may become redundant. Tasks such as data analysis and literature reviews can now be performed by machines. Will it become more difficult for young researchers to enter the system? Or will artificial intelligence democratize academia, providing opportunities to young minds without access to resources? Both are possible. But we don't know which scenario will play out.
What Would Bourdieu Say?
Bourdieu defined the academic field as an arena of “power struggles.” Academics try to gain position within the field by using different types of capital (cultural, social, economic). Your list of publications, citations, the university you attend, your relationships with your professors... These constitute your academic capital. So, is artificial intelligence creating a new type of capital? “Digital capital”? “Algorithmic capital”?
Whoever can write better prompts, whoever can use artificial intelligence more effectively, will they rise within the field? If so, how does Bourdieu's concept of “habitus” fit in here? Habitus refers to the practices, behavioral patterns, and tastes we internalize during our socialization process. But what will the habitus of a generation socialized with artificial intelligence look like? For them, “producing knowledge” will have a different meaning. Traditional academic capital was built on “years of education, disciplined reading, theoretical accumulation.” Now someone can chat with ChatGPT over a weekend and get the summary of what you've learned in ten years. Of course, it's a summary, not depth. But in today's world, how many people want depth? The social media age rewards speed and superficiality. Is intellectual capital shifting from ‘knowing a lot’ to ‘producing quickly’?
Social media had already begun to transform academia. Now, an academic's “visibility” is measured not by the quality of their publications, but by their number of Instagram followers or viral tweets on Twitter. The professor who explains Bourdieu in 30 seconds on TikTok is better known than the professor who writes books. This is the digital extension of what Bourdieu called “field struggle.” Who is the “real” academic now? The one who still publishes in peer-reviewed journals, or the one who garners millions of likes?
Facing a Machine Faster Than Humans
This is the most uncomfortable question. If artificial intelligence can synthesize more quickly, comprehensively, and with fewer errors, what does human academic labor mean? We still believe in the value of “being human.” But how resilient is this belief in the face of a pragmatic world? Perhaps the role of the human academic will change. We will no longer be “knowledge producers” but “knowledge curators” or “ethical evaluators.” We will take on a critical role, questioning the knowledge produced by artificial intelligence, placing it in its social context, and evaluating it ethically. But is this a satisfactory answer?
The Editor's Desk in 2030
Let's conduct a thought experiment. It's 2030. There are 200 articles on a journal editor's desk. 180 of them were written with AI support, 20 were written entirely by humans. Can the editor tell the difference? Should they? At first glance, AI-assisted articles may appear “cleaner.” There are no grammar mistakes, sentence structures are flawless, literature reviews are comprehensive, and references are properly formatted. But what about the content?
AI excels at synthesizing existing knowledge. It can scan hundreds of articles and extract common themes, bringing together the arguments of different authors. But can it say anything “new”? The answer to this question depends on how we define the concept of “originality.” If originality means “establishing a connection that has never been made before,” then artificial intelligence can do that. It can see patterns within billions of texts and catch connections that the human mind might overlook. But if originality means “insight born from lived experience,” then artificial intelligence has its limits. Because it cannot experience. It only processes data.
Let's return to the editor. While reading those 200 articles, how will they determine which ones are AI-generated? Perhaps they will use detection software. But that software isn't perfect either. It's a kind of arms race. In the end, the editor may give up on detection and ask something else: “Does this article contribute to the field? Is it worth reading?” If the answer is “yes,” does it really matter who wrote it? For example, I might write an AI prompt like this: “Analyze the similarities and differences between Bourdieu's concept of habitus and Giddens's theory of structuration.” The AI generates a text. There's no plagiarism, no repetition of quotes. But it synthesizes the ideas of two thinkers and makes a comparison. Is this synthesis “original”? In a sense, yes, because no text has ever produced this exact combination before. But in a sense, no, because I didn't produce these ideas; I just told the AI which ideas to combine. Who is the owner of the “original idea”? Me, the AI, or Bourdieu and Giddens?
When the answers to these questions become unclear, the fundamental values of academia are shaken.
When Knowledge Production Becomes Everyone's Job
The traditional answer was clear. Knowledge production was the job of academics, scientists, and experts. In the age of AI, this job is becoming accessible to everyone. A high school student can prepare a research report with artificial intelligence. An entrepreneur can conduct market analysis. A retiree can write a history book.
Is this good or bad? Both.
It's good because knowledge is no longer the monopoly of an elite class. If everyone can produce knowledge, diversity increases and different voices are heard. It's bad because quality control weakens. Misinformation, misleading claims, and manipulative content stemming from AI hallucinations proliferate. The concept of “expertise” erodes. People say, “My opinion is just as valuable as yours.” But in some areas, genuine expertise is necessary. Perhaps academia's new role will not be to produce knowledge but to evaluate it; that is, to take on the role of a “guardian of truth.” Taking the information in circulation and questioning: is this true, is it reliable, how was it produced, what is its bias, what are its limitations? This is a new role, and perhaps a much more important one. Because in the age of information abundance, the real problem is not a lack of information but information pollution. People are drowning in information but don't know what to believe. Academics can be a compass in this chaos.
But for this to happen, academia must reinvent itself. If it operates with the mindset of “we produce knowledge, you consume it,” it will be left behind. If it evolves into a role that says, “we evaluate knowledge and provide guidance,” it will remain indispensable in the age of artificial intelligence. Research shows that the vast majority of researchers today actively use artificial intelligence. However, most do not openly admit this. They fear that their articles will be seen as “worthless” and that they will be judged as “not real scientists.” If everyone knows that this article was written by artificial intelligence, is that text still ‘your’ work? Is it still that “unique” thing?
Open Questions and Uncertainties
Where is academia, where are universities headed in the age of artificial intelligence? There is no clear answer to this question. At one extreme, there is a dystopian future where universities are completely shut down, and knowledge is produced and distributed by technology companies. At the other extreme, there is a utopia where universities become “centers of human wisdom,” places that translate the raw knowledge produced by artificial intelligence into human values. The reality will probably lie somewhere in between.
Will I write those books after retirement? I don't know. Maybe I'll start writing, maybe I'll give up. Maybe I'll write with artificial intelligence, maybe by hand. Maybe I'll mix the two. But one thing I am sure of: if I write, those books will be “mine.” Because I won't just put words on those pages. I will put in experience, memory, emotion. Artificial intelligence cannot provide these. Young academics will live in a much more difficult world than we did. Our generation encountered artificial intelligence in the later stages of our lives, and we had to adapt. The new generation is growing up with artificial intelligence; for them, it's “normal.” But this normality won't make their jobs easier. On the contrary, when everyone uses the same tools, competition will be fiercer. It will be harder to stand out. Perhaps they will create a new definition of “academic” that is different from ours. That's natural. Every generation finds its own way.
Will universities still exist in twenty years? Some say they are “dying.” I don't believe that. But I do believe they will transform. Perhaps campuses will shrink, classes will become hybrid, and degree systems will change. But the need to come together to think, discuss, and learn will not disappear. People cannot learn everything by staring at a screen alone. Social interaction is still necessary.
How do I feel?
Fear. Yes, there is fear. Because we are heading into the unknown. The academic world I have known for forty years is transforming before my eyes. Sadness. Yes, there is sadness. Because beautiful things are disappearing. Those slow readings, those laborious writings, those long contemplations... These may never happen again. But there is also curiosity. A new world is opening up. Perhaps this world will be more interesting, richer, more surprising.
And there is hope. Because history has taught us that after every technological transformation, humanity has reinvented itself. The invention of the printing press did not kill book culture; it transformed it, democratized it. The invention of the camera did not kill art; it carried it to new mediums. The invention of the computer did not kill thinking; it accelerated it, made it more complex. Artificial intelligence may not kill academia either. It will transform it. And our task in this transformation is not to panic, but to steer the course.
Humans ask questions. Machines provide answers. But it is still humans who evaluate the answers.
If this balance is disrupted, we lose. If it is preserved, we win. Let me return to the question I asked at the beginning of this article. “Who will write my last book, me or a machine?” Now, at the end of the article, the answer is clearer. Both of us.
But “both of us” does not mean “half and half.” I will provide the idea. I will build the structure. I will do the critical evaluation. I will contribute my experience. Artificial intelligence will add speed, expand the literature, and offer different perspectives. The resulting book will be neither entirely “mine” nor entirely “its.” But that's not a problem. Because no book has ever belonged entirely to its author. Every writer carries a language, a culture, a tradition inherited from the past. Every writer is nourished by what they have read from others. The only difference is that now an algorithm has been added to these “others.” What matters is that when someone reads that book, they don't need to ask, “Did artificial intelligence write this, or did a human?” What matters is that the reader asks, “Did I learn something from this book?” If the answer is “yes,” then that book is successful.
Final note
As I finish this article, artificial intelligence is at my fingertips. It corrected some sentences and expanded some paragraphs. But I am writing these lines. Because these are the moments when an academic, on the verge of retirement, looks back on forty years of work and asks, “What have I done? What will I do?” These moments belong to me. And I don't want to share them with anyone.
I hope that academia will cease to hold a monopoly on knowledge and will instead become a source of wisdom.
I hope that young researchers will become stronger by using machines as tools rather than being crushed by them.
And I hope that forty years from now, another experienced academic will say in another article: “In 2026, everyone thought it was over. But we started again. And we did much more beautiful things.”
Humans have always found a way. And they will continue to do so.



