The knock-off Chinese large language model DeepSeek, powered by cheap Nvidia chips and (most likely) plenty of copyright infringement.
Note: the existential angst and futility of this post might be exacerbated by the fact that I haven’t been able to drink much lately– because I had a stomach virus, then my wife had the flu, and now I am at the tail end of a (very mild) case of COVID. I did have a few drinks yesterday afternoon while watching Rutgers lose another close one– but said drinks gave me a headache, so I guess I’m not completely recovered from COVID yet . . . although the nurse practitioner said my vitals are great and I’m probably not going to be very sick because I got the most recent COVID vaccine– so suck it, RFK.
So if you haven't fucked around with DeepSeek yet, go do so . . .
As an educator, I found this large language model stuff just annoying as fuck at first. Some of my students used it to write their papers, but they often got caught. Turnitin– the platform where they submit their essays– has a decent AI detector. And you could just tell.
I did a whole podcast on this.
But now AI has gone beyond annoying. It’s gotten creative, and if you prompt it correctly, it might be undetectable. And this is making me feel pretty shitty. Kind of outdated and outmoded. Or something.
Let’s use AI to refine my thoughts. Here’s what I fed DeepSeek.
Write a paragraph about a blogger, English teacher, and podcaster named Professor G. Truck, who is facing an existential crisis because he feels his creative thinking is being replaced and outdone by cheap Chinese AI. Include some dystopian sci-fi and musical references, such as Pink Floyd and Radiohead.
Here is what DeepSeek produced:
Professor G. Truck sat at his cluttered desk, the blinking cursor on his screen mocking him like a scene from a Philip K. Dick novel. A blogger, English teacher, and podcaster, he’d always prided himself on his creativity, but now cheap Chinese AI was flooding the internet, making his work feel obsolete. It was as if the world had become a dystopian mashup of Brave New World and 1984, where algorithms outthought him at every turn. Pink Floyd’s “Welcome to the Machine” played in the background, its lyrics a haunting reminder of his fears. At night, Radiohead’s “Everything in Its Right Place” echoed in his mind, the title’s irony twisting his stomach. Was this the future he’d been warned about—a world where creativity was commodified and humanity reduced to a footnote? Professor G. Truck wasn’t sure, but he wasn’t ready to surrender his humanity without a fight.
Pretty damn good. Aside from the whole "won't surrender without a fight." I am not a fighter. I’m weak and easily frustrated. I lose concentration. I like giving up and quitting, and this shit is making me want to give up all my creative endeavors (aside from getting stoned and making weird loops of instrumental music).
How can you compete? Why go on?
DeepSeek could produce this entire post in a fraction of a second, and it would be decent. Maybe even good. Comparable to whatever drivel I’m pumping out. So why am I bothering to type this by hand, when I could be prompting a supercomputer to do the grunt work for me?
I feel like no one who matters in the world of education is addressing this issue. My wife and I talk about it over coffee– and there’s plenty of speculation in the English Office, but this is all large-scale theorizing. Otherwise known as bullshitting. Then we go teach our classes.
This is not stuff that the high school administration wants to tackle– they care about things like absenteeism, lateness, the social and emotional health of the students, and how many kids you let out of your room to go to the bathroom (and if they have the proper ID lanyard). I think at tomorrow's meeting we're also going to learn about ICE and how to deal with that stuff. Admin doesn't want to get mired in a philosophical morass on exactly how much technology is beneficial to developing a human brain.
A few years ago, most schools tossed the kids some kind of computing device. The smart kids are using them– using them to replace their own thinking. And who wouldn’t? Especially if you are a kid on the competitive AP track . . . you’ve got a shitload of homework.
So eventually, we are going to answer these questions. Are we going to go back to the basics– paper, pencil, books? Or are we going to teach kids to sift through the AI answers and synthesize the best parts? Teach them to collaborate with our AI overlords?
Is knowledge going to be something stored on a computer, not in a brain? We are offloading more and more to the cloud, but we have to remember that the answer AI provides is shaped by the data it was trained on, which might not be perfect, and by deliberately designed algorithms– so AI can avoid or hide certain kinds of knowledge. DeepSeek won’t tell you anything about the 1989 Tiananmen Square protest. Who knows what kinds of guardrails Sam Altman and Mark Zuckerberg might include in their AI, to promote the values of the billionaire elite class?
There’s a real lack of common culture right now– kids do not know things that adults think they know, like the word “incumbent” or “arson.” But they know lots of memes and trends and stuff like that. Their knowledge is very fragmented. Maybe schools should just go back to the basics and try to provide a framework of knowledge about our nation that we can share– at least in Language Arts and History classes– although that can get political . . . are we going to teach the history of oppression, enslavement, and prejudice that black people experienced? Or is Trump going to strongarm the AI overlords into white-washing Black History Month?
If we're getting our answers from AI and we don't control the data, if we can't see under the hood, there's not much we can do. Like Donald Rumsfeld said, "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know."
How a particular AI generates an answer is an unknown unknown. We don't know what it's leaving out, or what moral or political stance it is tending towards.
Right now, I’m just taking basic precautions, because I don’t want to read AI answers. I'm making kids write things on paper. Because AI has all the answers, believe me. Friday, I tested DeepSeek on my Creative Writing midterm prompt about the film "Eternal Sunshine of the Spotless Mind.”
How does the film use unreliable narration to explore relationships, memory, and time? What does the film ultimately say about our identity, our perspective, and the way we get along with people?
No surprise, DeepSeek crushed it. Comprehensive and astute analysis. Stuff like:
Memory is central to our sense of self. As Joel's memories of Clementine are erased, he begins to lose parts of his identity. The film suggests that our memories, even the painful ones, are integral to who we are. Erasing them doesn't solve our problems but rather strips away a part of ourselves.
Students don’t write like this– if they turned in something like this, I would be skeptical. But then I asked DeepSeek to summarize all the material it generated about the film– and it was a lot of material– in two sentences, using lots of profanity. Here’s what it produced:
"Eternal Sunshine of the Spotless Mind" uses its mind-fuckery of unreliable narration to show how relationships are a messy, chaotic clusterfuck of distorted memories and emotional bullshit. The film dives into the raw, fucked-up truth that love isn’t about perfect memories or clear-cut facts—it’s about the messy, irrational, and deeply human shit that keeps pulling people back together, no matter how hard they try to erase each other.
This is going to be hard to flag as AI. And you can tell DeepSeek to write like a seventh grader who can’t use commas, or whatever. So now, anytime a smart kid hands in something typed on a computer, they might not have done the writing. It’s a dilemma.
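For anyone curious what that trick actually looks like when a kid scripts it instead of typing into the chat box, here’s a minimal sketch, assuming DeepSeek’s OpenAI-compatible API– the endpoint, model name, and key placeholder are what their docs advertise, not something I’ve verified, and the system prompt is just an illustration:

```python
# Rough sketch of the "dress down the prose" prompt described above.
# Assumes DeepSeek's OpenAI-compatible endpoint and "deepseek-chat" model
# as listed in their docs; the API key is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_KEY",          # placeholder, not a real key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

essay_prompt = (
    "How does 'Eternal Sunshine of the Spotless Mind' use unreliable narration "
    "to explore relationships, memory, and time?"
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        # The style instruction is the whole trick: same analysis,
        # dressed down so it no longer reads like a machine wrote it.
        {
            "role": "system",
            "content": "Write like a distracted seventh grader who never uses "
                       "commas and makes a couple of small spelling mistakes.",
        },
        {"role": "user", "content": essay_prompt},
    ],
)

print(response.choices[0].message.content)
```

Swap the system prompt for "honor-roll junior who overuses semicolons" and the detector has an even harder day.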
I was also pretty amazed when I asked DeepSeek to do a line-by-line analysis of “Hotel California.” It did a fantastic job. Try it.
Ok, I tire of this– it's too big a problem for Sunday morning. I’m going to try to play indoor soccer, something AI can't do (yet).
I'll have AI read the post and generate a conclusion . . .
The rise of AI like DeepSeek is flipping education and creativity on its head, and it’s not just about catching some kid copy-pasting from ChatGPT. I mean, AI can already write essays and analyze stuff like a caffeinated valedictorian, so what the hell are we even doing in school anymore? Should we just double down on making kids memorize pointless facts, or teach them to team up with AI like some kind of robot study buddy? It’s a giant mess, and let’s be real—schools are about as prepared for this as a goldfish in a shark tank. They’re too busy freaking out over attendance policies while the whole concept of knowledge is getting a software update.
But here’s the kicker: the real danger isn’t just AI doing our homework—it’s AI raising the next generation of thinkers. Imagine a world where kids think nuance is a brand of sparkling water and human experience is just a glitch in the system. If we offload all our brainy tasks to the cloud, we’re basically outsourcing our souls. We might end up with a future where people can’t even argue about pizza toppings without consulting an algorithm. The future of education—and honestly, human creativity—is hanging by a thread, and we’d better figure this out before Skynet starts grading our papers.
Now what if I told you that all the things in red were actually written by me, and all the things in normal font were written by AI?
That's not true-- but if it were, this would be a genius-level post. Unfortunately, I just thought of that idea now and I'm not rewriting the whole thing. And what sucks is that AI would have probably thought of that from the start. Next time . . .