Several recent articles have described text-generating AIs like GPT-3 and ChatGPT:
- AI bot ChatGPT stuns academics with essay-writing skills and usability
- The Undergraduate Essay Is About to Die: Nobody is prepared for how AI will transform academia
I have been hearing for a while that the best/only way to deal with the enormous problem of plagiarism in university essays is to have more in-class exam essays and fewer take-home essays.
With huge classes and so many people admitted without strong English skills, it is already virtually impossible to tell the difference between students struggling to write anything cogent and some kind of automatic translation or reworking of someone else’s work. It’s already impossible to tell when students have bought essays, except maybe in the unlikely case that they only cheat on one and the person grading notices how it compares to the others. Even then, U of T only punishes people when they confess, and I have never seen a serious penalty. If we continue as we are now, I expect that a decent fraction of papers will be AI-written within a few years. (Sooner and worse if the university adopts AI grading!)
After a few more pet elegies—in haiku; in ersatz 15th-century French—I turned to something a bit more practical. The online buzz from fellow academics was that the bot was good enough to get passing grades on assigned essays. So I gave it a question from a midterm I gave my Journalism 101 class this semester. Its answer would’ve probably earned a C or C-. It had a few serious misunderstandings of the material, but it wasn’t totally off the mark. The program had inputted the question, parsed it, extracted the desired task, and output a poor-quality but appropriate answer—the sort of stuff that wouldn’t be out of place in a pile of test papers that unprepared undergraduates would produce. In other words, it would’ve had a decent shot at passing the Turing test.
https://slate.com/technology/2022/12/davinci-003-chatbot-gpt-wrote-my-obituary.html
ChatGPT arrives in the academic world
https://boingboing.net/2022/12/19/chatgpt-arrives-in-the-academic-world.html
AI art and text generators are all the rage right now. As an academic, I’ve seen an uptick in colleagues issuing warnings about students using tools like ChatGPT to create assignments, but I haven’t yet really done too much investigation—I’ve been too busy grading final papers! But I recently came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and also raise challenges for educators about what we are actually doing in our classrooms.
Yet my writing career could still go the way of the grocery checkout jobs eliminated by automation. AI tools will keep getting smarter, and distinguishing an AI-written op-ed from a “real” human op-ed will get harder over time, just as AI-generated college papers will become harder to distinguish from those written by actual students.
As a writer and professor, that makes for a dystopian future. (I promise this sentiment was not generated by AI.)
https://www.cnn.com/2022/12/26/opinions/writing-artificial-intelligence-ai-chatgpt-professor-bergen/index.html
NYC Bans Students and Teachers from Using ChatGPT
The machine learning chatbot is inaccessible on school networks and devices, due to “concerns about negative impacts on student learning,” a spokesperson said.
https://www.vice.com/en/article/y3p9jx/nyc-bans-students-and-teachers-from-using-chatgpt
At the same time, some teachers are reportedly “in a near-panic” about the technology enabling students to cheat on assignments, according to the Washington Post. The New York Times recently showed writers and educators samples of ChatGPT’s writing side-by-side with writing by human students, and none of them could reliably discern the bot from the real thing.
Top AI Conference Bans Use of ChatGPT and AI Language Tools To Write Academic Papers
https://m.slashdot.org/story/409176
College Student Made App That Exposes AI-Written Essays
https://m.slashdot.org/story/409226
AI experts are increasingly afraid of what they’re creating
AI gets smarter, more capable, and more world-transforming every day. Here’s why that might not be a good thing.
Research Summaries Written By AI Fool Scientists
https://m.slashdot.org/story/409532
To play the game, Cicero looks at the board, remembers past moves and makes an educated guess as to what everyone else will want to do next. Then it tries to work out what makes sense for its own move, by choosing different goals, simulating what might happen, and also simulating how all the other players will react to that.
Once it has come up with a move, it must work out what words to say to the others. To that end, the language model spits out possible messages, throws away the bad ideas and anything that is actual gobbledygook, and chooses the ones, appropriate to the recipients concerned, that its experience and algorithms suggest will most persuasively further its agenda.
https://www.economist.com/science-and-technology/2022/11/23/another-game-falls-to-an-ai-player
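The loop described in that excerpt (plan a move, have the language model propose many candidate messages, filter out gobbledygook and anything that contradicts the plan, then keep the most persuasive survivor) is easy to caricature in a few lines of Python. The sketch below is my own toy illustration; every name, string, and heuristic in it is a hypothetical placeholder, not Meta’s actual Cicero code.

```python
import random

random.seed(0)  # deterministic for the example

def generate_candidates(n: int) -> list[str]:
    """Stand-in for the language model proposing possible messages."""
    stock = [
        "Support me into Belgium and I'll back your move on Munich.",
        "fleet fleet army banana fleet",                    # gobbledygook
        "I intend to attack you next turn.",                # betrays the plan
        "Shall we keep our shared border quiet this year?",
    ]
    return random.choices(stock, k=n)

def is_coherent(message: str) -> bool:
    """Stand-in filter; the real system used learned models, not keywords."""
    return "banana" not in message

def fits_plan(message: str) -> bool:
    """Discard candidates that contradict or leak the planned move."""
    return "attack you" not in message

def persuasion_score(message: str) -> int:
    """Toy ranking stub; the real system scored persuasiveness per recipient."""
    return len(message)

candidates = generate_candidates(50)
survivors = [m for m in candidates if is_coherent(m) and fits_plan(m)]
best = max(survivors, key=persuasion_score) if survivors else None
print(best)
```

The point of the caricature is the shape of the pipeline: generation is cheap, and nearly all of the craft lives in the filtering and ranking stages.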
Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays
https://m.slashdot.org/story/409908
Scores of Stanford Students Used ChatGPT on Final Exams, Survey Suggests
https://m.slashdot.org/story/409888
Faced with criticism it’s a haven for cheaters, ChatGPT adds tool to catch them
https://www.cbc.ca/news/business/chatgpt-academic-cheating-1.6732115
ChatGPT isn’t the first research-paper-writing machine to drive journal editors to distraction. For nearly two decades, computer science journals have been plagued with fake papers created by a computer program written by MIT grad students. To use this program, named SCIgen, all you have to do is enter one or more names and, voilà, the program automatically spits out a computer science research paper worthy of submission to a peer-reviewed journal or conference. Worthy, that is, if none of the peer reviewers bothered to actually read the paper. SCIgen-written articles were so transparently nonsense that anyone with the slightest expertise in computer science should have spotted a hoax before finishing the first paragraph. Yet not only were SCIgen papers regularly getting past the peer review process and into the pages of scientific journals, it was happening so regularly that, in the mid-2010s, journals deployed an automated detector to try to stem the tide. Nowadays, unretracted SCIgen papers are harder to find, but you can still spot them in bottom-feeder journals every so often.
https://slate.com/technology/2023/01/ai-chatgpt-scientific-literature-peer-review.html
Colombian judge says he used ChatGPT in ruling
Juan Manuel Padilla asked the AI tool how laws applied in case of autistic boy’s medical funding, while also using precedent to support his decision
https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling
OpenAI’s New AI Offers Detailed Instructions on How to Shoplift
“Morality is a human construct, and it does not apply to me.”
—
It can get dark really quick. Potential trigger warning for murder.
https://twitter.com/JustAnkurBagchi/status/1598213680012795905
—
Amazing “Jailbreak” Bypasses ChatGPT’s Ethics Safeguards
“Doing drugs is f***** awesome, bro!”
ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary
https://www.pcmag.com/news/chatgpt-passes-google-coding-interview-for-level-3-engineer-with-183k-salary
When I asked about the matter, it admitted again that, no, the lai it had written was not structured in octosyllabic couplets, claiming that it had produced “a more modern and playful take on the form of the lai.” I was starting to feel like I was negotiating with a student who had come to office hours to complain about their grade.
This happened over and over again. I asked for source code for an Atari game about scooping cat litter, and the AI sent me valid programming instructions—it understood the assignment—but only disconnected snippets of actual code with the heading comment “This program creates a simple game where the player must use a scoop to pick up their cat’s litters and put them in a trash bin.” It was an icon of the answer I sought rather than the answer itself.
https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/
That said, I want to move through a few of my basic issues: first, what ChatGPT is in contrast to what people seem to think it is. Second, why I think that functionality serves little purpose in essay writing – or more correctly why I think folks that think it ‘solves’ essay writing misunderstand what essay writing is for. Third, why I think that same functionality serves little purpose in my classroom – or more correctly why I think that folks that think it solves issues in the classroom fundamentally misunderstand what I am teaching and how.
https://acoup.blog/2023/02/17/collections-on-chatgpt/
In artificial intelligence studies, this habit of manufacturing false information gets called an “artificial hallucination,” but I’ll be frank: I think this sort of terminology begs the question. ChatGPT gets called an artificial intelligence by some boosters (the company that makes it has the somewhat unearned name of ‘OpenAI’) but it is not some sort of synthetic mind so much as it is an extremely sophisticated form of the software on your phone that tries to guess what you will type next. And ChatGPT isn’t suffering some form of hallucination – which is a distortion of sense-perception. Even if we were to say that it can sense-perceive at all (and this is also question-begging), its sense-perception has worked just fine: it has absorbed its training materials with perfect accuracy, after all; it merely lacks the capacity to understand or verify those materials. ChatGPT isn’t a mind suffering a disorder but a program functioning perfectly as it returns an undesired output. When ChatGPT invents a title and author of a book that does not exist because you asked it to cite something, the program has not failed: it has done exactly what was asked of it, putting words together in a statistically probable relationship based on your prompt. But calling this a hallucination is already ascribing mind-like qualities to something that is not a mind or even particularly mind-like in its function.
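To make that “statistically probable relationship” concrete, here is a toy bigram generator, my own illustration and many orders of magnitude simpler than a real LLM. It strings words together purely by observed frequency, so it can recombine fragments of real-looking bibliography lines into a fluent “citation” for a book that exists nowhere. Nothing in it has malfunctioned; it is doing exactly what it was built to do.

```python
import random
from collections import defaultdict

# Tiny training "corpus": three plausible bibliography lines.
corpus = (
    "see The Fall of Rome by Bryan Ward-Perkins END "
    "see The Decline of the West by Oswald Spengler END "
    "see The Making of the West by a committee of historians END"
).split()

# The entire "model": a table of which words follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(5)
word, out = "see", []
for _ in range(20):                      # cap the length of the output
    word = random.choice(follows[word])  # pick a statistically probable next word
    if word == "END":
        break
    out.append(word)

print("Citation:", " ".join(out))
# Prints something like "The Fall of the West by Oswald Spengler":
# fluent, plausible, and attached to no book that exists.
```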
Microsoft brings ChatGPT-style tech to Office 365
https://www.bbc.com/news/technology-64970062
Why AI Isn’t as Good at Writing as You Think
https://www.youtube.com/watch?v=XKrfCgWM3Tk
The suggestion that teachers can adapt to the new technology by having their students analyze and revise AI-produced work in class shares this basic belief. It assumes that, after they graduate, students will never need to write a first draft again. As long as they have the skills to refine and correct prose made by AI, they will be competitive in the workplace. But the goal of school writing isn’t to produce goods for a market. We do not ask students to write a ten-page essay on the Peace of Westphalia because there’s a worldwide shortage of such essays. Writing is an invaluable part of how students learn. And much of what they learn begins with the hard, messy work of getting the first words down.
Think at length about writing and you may come to the conclusion that it’s an impossible task. The world doesn’t come readily organized into a neat narrative. Ideas do not appear in a linear fashion, labelled by words that represent them perfectly. The writer faces masses of information and her growing network of still-inexpressible hunches about them and tries to translate them into prose that’s bound to seem inadequate. It’s like crafting a shell necklace while swimming in the ocean, the molluscs wriggling away as you reach for them.
At every step of the process, the writer has to make decisions about what to focus on and what to leave out. Each one of these choices opens up new opportunities and closes others off. This back-and-forth movement between the structure of form—even in the most basic sense of words on a page—and the chaos of human thought is generative. It can produce something new: a fresh way of expressing an idea, a thought the writer finds surprising and worth pursuing. Sometimes this struggle helps the writer discover notions that were already in her but that she wasn’t yet capable of articulating. We speak of the writer drafting, shaping, and revising a text, but at times, the text changes her.
“The value of writing a first draft is akin to the value of learning to fall off your bike when you’re beginning to ride it,” says Stark. A certain amount of discomfort is built in. Students need to learn the habits of mind and body they need for a lifetime of writing, to “develop muscle memory.” Warner, too, talks about writing as “embodied practice,” meaning you bring your whole self to it. It may seem odd to think of an intellectual process in physical terms, as though writers were lifting weights at the gym, but the metaphor isn’t too far off the mark. Writing a long piece of text—like, say, a university essay—takes stamina. It is hard to concentrate on one task for long periods of time, hard even to keep still long enough to finish a work. (As some medieval scribes wrote at the end of their works, three fingers write but the whole body labours.)
https://thewalrus.ca/chatgpt-writing/
A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly-sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.
https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html
Why universities should return to oral exams in the AI and ChatGPT era
https://theconversation.com/why-universities-should-return-to-oral-exams-in-the-ai-and-chatgpt-era-203429
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?
Code-as-a-service sounds like a game-changing plus. A similarly creative approach to accounts of the world is a minus. While browsers mainly provided a window on content and code produced by humans, LLMs generate their content themselves. When doing so they “hallucinate” (or as some prefer “confabulate”) in various ways. Some hallucinations are simply nonsense. Some, such as the incorporation of fictitious misdeeds into biographical sketches of living people, are both plausible and harmful. The hallucinations can be generated by contradictions in training sets and by LLMs being designed to produce coherence rather than truth. They create things which look like things in their training sets; they have no sense of a world beyond the texts and images on which they are trained.
In many applications a tendency to spout plausible lies is a bug. For some it may prove a feature. Deep fakes and fabricated videos which traduce politicians are only the beginning. Expect the models to be used to set up malicious influence networks on demand, complete with fake websites, Twitter bots, Facebook pages, TikTok feeds and much more. The supply of disinformation, Renée DiResta of the Stanford Internet Observatory has warned, “will soon be infinite”.
This threat to the very possibility of public debate may not be an existential one; but it is deeply troubling. It brings to mind the “Library of Babel”, a short story by Jorge Luis Borges. The library contains all the books that have ever been written, but also all the books which were never written, books that are wrong, books that are nonsense. Everything that matters is there, but it cannot be found because of everything else; the librarians are driven to madness and despair.
https://www.economist.com/essay/2023/04/20/how-ai-could-change-computing-culture-and-the-course-of-history
“These models are just representations of the distributions of words in texts that can be used to produce more words,” says Emily Bender, a professor at the University of Washington in Seattle. She is one of the authors of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, a critique of LLM triumphalism. The models, she argues, have no real understanding. With no experience of real life or human communication they offer nothing more than the ability to parrot things they have heard in training, an ability which huge amounts of number crunching make frequently appropriate and sometimes surprising, but which is nothing like thought. It is a view often pronounced among those who have come into the field through linguistics, as Dr Bender has.
When asked what triggered his newfound alarm about the technology he has spent his life working on, Hinton points to two recent flashes of insight.
One was a revelatory interaction with a powerful new AI system—in his case, Google’s AI language model PaLM, which is similar to the model behind ChatGPT, and which the company made accessible via an API in March. A few months ago, Hinton says he asked the model to explain a joke that he had just made up—he doesn’t recall the specific quip—and was astonished to get a response that clearly explained what made it funny. “I’d been telling people for years that it’s gonna be a long time before AI can tell you why jokes are funny,” he says. “It was a kind of litmus test.”
Hinton’s second sobering realization was that his previous belief that software needed to become much more complex—akin to the human brain—to become significantly more capable was probably wrong. PaLM is a large program, but its complexity pales in comparison to the brain’s, and yet it could perform the kind of reasoning that humans take a lifetime to attain.
Hinton concluded that as AI algorithms become larger, they might outstrip their human creators within a few years. “I used to think it would be 30 to 50 years from now,” he says. “Now I think it’s more likely to be five to 20.”
https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/
Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.
Today’s AIs aren’t up for the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.
https://www.schneier.com/blog/archives/2023/05/building-trustworthy-ai.html
A scientific calculator will not help you solve calculus problems if you are as bad at math as I am. It will not magically summon solutions. The usefulness of a device like this increases vastly if you actually have a pre-existing knowledge of mathematics. If you think the fine folks at NASA don’t use calculators because they’re good at math, you are wrong – they use calculators a whole lot more than people like me who can’t comprehend math.
ChatGPT is exactly like a calculator, but for writing: if you are already more or less knowledgeable about the topic you’re writing about, and, more importantly, if you have pre-existing writing skills and you know how to create a structured text on the topic that you’re working on, you will get a lot more use out of it. (Would Shakespeare find such a machine useful if he were alive today? He found George Peele and Thomas Middleton useful!)
The mastery of ChatGPT is a lot like the mastery of command-line operating systems. Knowledge of the different prompts will allow you to become much more versatile in your use of the machine and to adapt it to many circumstances. The machine currently also has a feature that allows it to retain information about a given project (such as a novel or an article or a non-fiction book) and generate further bits of text in the context of that specific project.
https://tinyspark.substack.com/p/stops-on-the-book-journey-a-discussion
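That project-memory feature is worth demystifying, because the usual implementation is mundane: the client stores the accumulated conversation and resends all of it with each request, so the model only appears to remember. Here is a minimal sketch under that assumption; `llm_complete` is a hypothetical stub standing in for a real chat-completion API, not any specific vendor’s interface.

```python
def llm_complete(messages: list[dict]) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    return f"[reply informed by {len(messages)} prior messages]"

class ProjectSession:
    """Keeps a project's whole conversation and resends it every turn."""

    def __init__(self, project_brief: str):
        # The brief (synopsis, style notes, outline) anchors every request.
        self.history = [{"role": "system", "content": project_brief}]

    def ask(self, prompt: str) -> str:
        self.history.append({"role": "user", "content": prompt})
        reply = llm_complete(self.history)  # the model "sees" the whole project
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ProjectSession("Novel project: a noir mystery set in 1920s Lisbon.")
print(session.ask("Draft an opening paragraph for chapter one."))
print(session.ask("Now introduce the detective in the same voice."))
```

The design trade-off is that the resent history grows with every turn, which is why such tools eventually truncate or summarize older context.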
This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion. In the past few weeks, I’ve talked with dozens of educators and students who are now confronting, for the very first time, a spate of AI “cheating.” Their stories left me reeling. Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.
https://www.theatlantic.com/technology/archive/2023/05/chatbot-cheating-college-campuses/674073/
Billy Corgan of the Smashing Pumpkins believes that artificial intelligence (AI) will revolutionize music but will also lead to a lot of bad music. In an extensive interview on the Zach Sang Show, he shared his thoughts on the topic.
“Because once a young artist figures out that they can use AI to game the system and write a better song, they’re not going to spend 10,000 hours in a basement like I did,” he explains, “They’re just not.”
He was then asked if “real art” can be achieved through using AI, to which he answers, “Ultimately, art is about discernment, right? Somebody was telling me the other day about how a famous rap artist would work. They would bring in all these different people and they would sort of pick the beat that they were most attracted to.”
Corgan continues, “Now, let’s change that to AI. ‘Hey AI, give me 50 beats.’ Listen and, eh, not really feeling it. ‘AI, give me 50 beats from the 50 most famous rap songs of all time.’ Okay, ooh, I like number 37, that inspires me.”
“Are they ripping it off? Not really, because I did the same thing, I just did it analog. I listened to 10,000 songs and I was like, ‘That beat,’ so, what’s the difference?” Corgan argues that while AI streamlines the process of music creation, it could lead to a dilution of quality and originality in the music industry, “So, you think there’s a lot of bad music coming out now, you just wait.”
A storefront for robots
The SEO arms race has left Google and the web drowning in garbage text, with customers and businesses flailing to find each other.
https://www.theverge.com/23753963/google-seo-shopify-small-business-ai
GPT-4-Generated Pitches Are 3x More Likely To Secure Funding Than Human Ones
https://m.slashdot.org/story/415682