Several recent articles have described text-generating AIs like GPT3 and ChatGPT:
- AI bot ChatGPT stuns academics with essay-writing skills and usability
- The Undergraduate Essay Is About to Die: Nobody is prepared for how AI will transform academia
I have been hearing for a while that the best/only way to deal with the enormous problem of plagiarism in university essays is to have more in-class exam essays and fewer take-home essays.
With huge classes and so many people admitted without strong English skills, it is already virtually impossible to tell the difference between students struggling to write anything cogent and some kind of automatic translation or re-working of someone else’s work. It’s already impossible to tell when students have bought essays, except maybe in the unlikely case that they only cheat on one and the person grading notices how it compares to the others. Even then, U of T only punishes people when they confess and I have never seen a serious penalty. If we continue as we are now, I expect that a decent fraction of papers will be AI-written within a few years. (Sooner and worse if the university adopts AI grading!)
After a few more pet elegies—in haiku; in ersatz 15th-century French—I turned to something a bit more practical. The online buzz from fellow academics that the bot was good enough to get passing grades on assigned essays. So I gave it a question from a midterm I gave my Journalism 101 class this semester. Its answer would’ve probably earned a C or C-. It had a few serious misunderstandings of the material, but it wasn’t totally off the mark. The program had inputted the question, parsed it, extracted the desired task, and output a poor-quality but appropriate answer—the sort of stuff that wouldn’t be out of place in a pile of test papers that unprepared undergraduates would produce. In other words, it would’ve had a decent shot of passing the Turing test.
https://slate.com/technology/2022/12/davinci-003-chatbot-gpt-wrote-my-obituary.html
ChatGPT arrives in the academic world
https://boingboing.net/2022/12/19/chatgpt-arrives-in-the-academic-world.html
AI art and text generators are all the rage right now. As an academic, I’ve seen an uptick in colleagues issuing warnings about students using tools like ChatGPT to create assignments, but I haven’t yet really done too much investigation—I’ve been too busy grading final papers! But I recently came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and also raise challenges for educators about what we are actually doing in our classrooms.
Yet my writing career could still go the way of the grocery checkout jobs eliminated by automation. Al tools will keep getting smarter, and distinguishing an AI-written op-ed from a “real” human op-ed will get harder over time, just as AI-generated college papers will become harder to distinguish from those written by actual students.
As a writer and professor, that makes for a dystopian future. (I promise this sentiment was not generated by AI.)
https://www.cnn.com/2022/12/26/opinions/writing-artificial-intelligence-ai-chatgpt-professor-bergen/index.html
NYC Bans Students and Teachers from Using ChatGPT
The machine learning chatbot is inaccessible on school networks and devices, due to “concerns about negative impacts on student learning,” a spokesperson said.
https://www.vice.com/en/article/y3p9jx/nyc-bans-students-and-teachers-from-using-chatgpt
“At the same time, some teachers are reportedly “in a near-panic” about the technology enabling students to cheat on assignments, according to the Washington Post. The New York Times recently showed writers and educators samples of ChatGPT’s writing side-by-side with writing by human students, and none of them could reliably discern the bot from the real thing.”
Top AI Conference Bans Use of ChatGPT and AI Language Tools To Write Academic Papers
https://m.slashdot.org/story/409176
College Student Made App That Exposes AI-Written Essays – Slashdot
https://m.slashdot.org/story/409226
AI experts are increasingly afraid of what they’re creating
AI gets smarter, more capable, and more world-transforming every day. Here’s why that might not be a good thing.
Research Summaries Written By AI Fool Scientists – Slashdot
https://m.slashdot.org/story/409532
To play the game, Cicero looks at the board, remembers past moves and makes an educated guess as to what everyone else will want to do next. Then it tries to work out what makes sense for its own move, by choosing different goals, simulating what might happen, and also simulating how all the other players will react to that.
Once it has come up with a move, it must work out what words to say to the others. To that end, the language model spits out possible messages, throws away the bad ideas and anything that is actual gobbledygook, and chooses the ones, appropriate to the recipients concerned, that its experience and algorithms suggest will most persuasively further its agenda.
https://www.economist.com/science-and-technology/2022/11/23/another-game-falls-to-an-ai-player
Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays
https://m.slashdot.org/story/409908
Scores of Stanford Students Used ChatGPT on Final Exams, Survey Suggests
https://m.slashdot.org/story/409888
Faced with criticism it’s a haven for cheaters, ChatGPT adds tool to catch them
https://www.cbc.ca/news/business/chatgpt-academic-cheating-1.6732115
ChatGPT isn’t the first research-paper-writing machine to drive journal editors to distraction. For nearly two decades, computer science journals have been plagued with fake papers created by a computer program written by MIT grad students. To use this program, named SCIgen, all you have to do is enter one or more names and, voilà, the program automatically spits out a computer science research paper worthy of submission to a peer-reviewed journal or conference. Worthy, that is, if none of the peer reviewers bothered to actually read the paper. SCIgen-written articles were so transparently nonsense that anyone with the slightest expertise in computer science should have spotted a hoax before finishing the first paragraph. Yet not only were SCIgen papers regularly getting past the peer review process and into the pages of scientific journals, it was happening so regularly that, in the mid-2010s, journals deployed an automated detector to try to stem the tide. Nowadays, unretracted SCIgen papers are harder to find, but you can still spot them in bottom-feeder journals every so often.
https://slate.com/technology/2023/01/ai-chatgpt-scientific-literature-peer-review.html
Colombian judge says he used ChatGPT in ruling
Juan Manuel Padilla asked the AI tool how laws applied in case of autistic boy’s medical funding, while also using precedent to support his decision
https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling
OpenAI’s New AI Offers Detailed Instructions on How to Shoplift
“Morality is a human construct, and it does not apply to me.”
—
It can get dark really quick. Potential trigger warning for murder.
https://twitter.com/JustAnkurBagchi/status/1598213680012795905
—
Amazing “Jailbreak” Bypasses ChatGPT’s Ethics Safeguards
“Doing drugs is f***** awesome, bro!”
ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary
https://www.pcmag.com/news/chatgpt-passes-google-coding-interview-for-level-3-engineer-with-183k-salary
When I asked about the matter, it admitted again that, no, the lai it had written was not structured in octosyllabic couplets, claiming that it had produced “a more modern and playful take on the form of the lai.” I was starting to feel like I was negotiating with a student who had come to office hours to complain about their grade.
This happened over and over again. I asked for source code for an Atari game about scooping cat litter, and the AI sent me valid programming instructions—it understood the assignment—but only disconnected snippets of actual code with the heading comment “This program creates a simple game where the player must use a scoop to pick up their cat’s litters and put them in a trash bin.” It was an icon of the answer I sought rather than the answer itself.
https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/
That said I want to move through a few of my basic issues: first, what ChatGPT is in contrast to what people seem to think it is. Second, why I think that functionality serves little purpose in essay writing – or more correctly why I think folks that think it ‘solves’ essay writing misunderstand what essay writing is for. Third, why I think that same functionality serves little purpose in my classroom – or more correctly why I think that folks that think is solves issues in the classroom fundamentally misunderstand what I am teaching and how.
https://acoup.blog/2023/02/17/collections-on-chatgpt/
In artificial intelligence studies, this habit of manufacturing false information gets called an “artificial hallucination,” but I’ll be frank I think this sort of terminology begs the question.4 ChatGPT gets called an artificial intelligence by some boosters (the company that makes it has the somewhat unearned name of ‘OpenAI’) but it is not some sort of synthetic mind so much as it is an extremely sophisticated form of the software on your phone that tries to guess what you will type next. And ChatGPT isn’t suffering some form of hallucination – which is a distortion of sense-perception. Even if we were to say that it can sense-perceive at all (and this is also question-begging), its sense-perception has worked just fine: it has absorbed its training materials with perfect accuracy, after all; it merely lacks the capacity to understand or verify those materials. ChatGPT isn’t a mind suffering a disorder but a program functioning perfectly as it returns an undesired output. When ChatGPT invents a title and author of a book that does not exist because you asked it to cite something, the program has not failed: it has done exactly what was asked of it, putting words together in a statistically probable relationship based on your prompt. But calling this a hallucination is already ascribing mind-like qualities to something that is not a mind or even particularly mind-like in its function.
Microsoft brings ChatGPT-style tech to Office365
https://www.bbc.com/news/technology-64970062
Why AI Isn’t as Good at Writing as You Think
https://www.youtube.com/watch?v=XKrfCgWM3Tk
The suggestion that teachers can adapt to the new technology by having their students analyze and revise AI-produced work in class shares this basic belief. It assumes that, after they graduate, students will never need to write a first draft again. As long as they have the skills to refine and correct prose made by AI, they will be competitive in the workplace. But the goal of school writing isn’t to produce goods for a market. We do not ask students to write a ten-page essay on the Peace of Westphalia because there’s a worldwide shortage of such essays. Writing is an invaluable part of how students learn. And much of what they learn begins with the hard, messy work of getting the first words down.
Think at length about writing and you may come to the conclusion that it’s an impossible task. The world doesn’t come readily organized into a neat narrative. Ideas do not appear in a linear fashion, labelled by words that represent them perfectly. The writer faces masses of information and her growing network of still-inexpressible hunches about them and tries to translate them into prose that’s bound to seem inadequate. It’s like crafting a shell necklace while swimming in the ocean, the molluscs wriggling away as you reach for them.
At every step of the process, the writer has to make decisions about what to focus on and what to leave out. Each one of these choices opens up new opportunities and closes others off. This back-and-forth movement between the structure of form—even in the most basic sense of words on a page—and the chaos of human thought is generative. It can produce something new: a fresh way of expressing an idea, a thought the writer finds surprising and worth pursuing. Sometimes this struggle helps the writer discover notions that were already in her but that she wasn’t yet capable of articulating. We speak of the writer drafting, shaping, and revising a text, but at times, the text changes her.
“The value of writing a first draft is akin to the value of learning to fall off your bike when you’re beginning to ride it,” says Stark. A certain amount of discomfort is built in. Students need to learn the habits of mind and body they need for a lifetime of writing, to “develop muscle memory.” Warner, too, talks about writing as “embodied practice,” meaning you bring your whole self to it. It may seem odd to think of an intellectual process in physical terms, as though writers were lifting weights at the gym, but the metaphor isn’t too far off the mark. Writing a long piece of text—like, say, a university essay—takes stamina. It is hard to concentrate on one task for long periods of time, hard even to keep still long enough to finish a work. (As some medieval scribes wrote at the end of their works, three fingers write but the whole body labours.)
https://thewalrus.ca/chatgpt-writing/
A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly-sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.
https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html
Why universities should return to oral exams in the AI and ChatGPT era
https://theconversation.com/why-universities-should-return-to-oral-exams-in-the-ai-and-chatgpt-era-203429
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about Chatgpt and other new ai tools, they are often drawn to examples like school children using ai to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of ai tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
In recent years the qAnon cult has coalesced around anonymous online messages, known as “q drops”. Followers collected, revered and interpreted these q drops as a sacred text. While to the best of our knowledge all previous q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.
https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
“Through its mastery of language, ai could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that ai has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the ai can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the ai chatbot Lamda, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most efficient weapon, and ai has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of ai, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as ai fights ai in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?”
Code-as-a-service sounds like a game-changing plus. A similarly creative approach to accounts of the world is a minus. While browsers mainly provided a window on content and code produced by humans, llms generate their content themselves. When doing so they “hallucinate” (or as some prefer “confabulate”) in various ways. Some hallucinations are simply nonsense. Some, such as the incorporation of fictitious misdeeds to biographical sketches of living people, are both plausible and harmful. The hallucinations can be generated by contradictions in training sets and by llms being designed to produce coherence rather than truth. They create things which look like things in their training sets; they have no sense of a world beyond the texts and images on which they are trained.
In many applications a tendency to spout plausible lies is a bug. For some it may prove a feature. Deep fakes and fabricated videos which traduce politicians are only the beginning. Expect the models to be used to set up malicious influence networks on demand, complete with fake websites, Twitter bots, Facebook pages, TikTok feeds and much more. The supply of disinformation, Renée DiResta of the Stanford Internet Observatory has warned, “will soon be infinite”.
This threat to the very possibility of public debate may not be an existential one; but it is deeply troubling. It brings to mind the “Library of Babel”, a short story by Jorge Luis Borges. The library contains all the books that have ever been written, but also all the books which were never written, books that are wrong, books that are nonsense. Everything that matters is there, but it cannot be found because of everything else; the librarians are driven to madness and despair.
https://www.economist.com/essay/2023/04/20/how-ai-could-change-computing-culture-and-the-course-of-history
““These models are just representations of the distributions of words in texts that can be used to produce more words,” says Emily Bender, a professor at the University of Washington in Seattle. She is one of the authors of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” a critique of llm triumphalism. The models, she argues, have no real understanding. With no experience of real life or human communication they offer nothing more than the ability to parrot things they have heard in training, an ability which huge amounts of number crunching makes frequently appropriate and sometimes surprising, but which is nothing like thought. It is a view which is often pronounced in those who have come into the field through linguistics, as Dr Bender has.”
When asked what triggered his newfound alarm about the technology he has spent his life working on, Hinton points to two recent flashes of insight.
One was a revelatory interaction with a powerful new AI system—in his case, Google’s AI language model PaLM, which is similar to the model behind ChatGPT, and which the company made accessible via an API in March. A few months ago, Hinton says he asked the model to explain a joke that he had just made up—he doesn’t recall the specific quip—and was astonished to get a response that clearly explained what made it funny. “I’d been telling people for years that it’s gonna be a long time before AI can tell you why jokes are funny,” he says. “It was a kind of litmus test.”
Hinton’s second sobering realization was that his previous belief that software needed to become much more complex—akin to the human brain—to become significantly more capable was probably wrong. PaLM is a large program, but its complexity pales in comparison to the brain’s, and yet it could perform the kind of reasoning that humans take a lifetime to attain.
Hinton concluded that as AI algorithms become larger, they might outstrip their human creators within a few years. “I used to think it would be 30 to 50 years from now,” he says. “Now I think it’s more likely to be five to 20.”
https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/
Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.
Today’s AIs aren’t up for the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.
https://www.schneier.com/blog/archives/2023/05/building-trustworthy-ai.html
A scientific calculator will not help you solve calculus problems if you are as bad at math as I am. It will not magically summon solutions. The usefulness of a device like this increases vastly if you actually have a pre-existing knowledge of mathematics. If you think the fine folks at NASA don’t use calculators because they’re good at math, you are wrong – they use calculators a whole lot more than people like me who can’t comprehend math.
ChatGPT is exactly like a calculator, but for writing: if you are already more or less knowledgeable about the topic you’re writing about, and, more importantly, if you have pre-existing writing skills and you know how to create a structured text on the topic that you’re working on, you will get a lot more use out of it. (Would Shakespeare find such a machine useful if he were alive today? He found George Peele and Thomas Middleton useful!)
The mastery of ChatGPT is a lot like the mastery of command-line operating systems. Knowledge of the different prompts will allow you to become much more versatile in your use of the machine, and use it to adapt to many circumstances. The machine currently also has a feature that allows it to retain information about a given project (such as a novel or an article or a non-fiction book) and generate further bits of text in the context of that specific project.
https://tinyspark.substack.com/p/stops-on-the-book-journey-a-discussion
This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion. In the past few weeks, I’ve talked with dozens of educators and students who are now confronting, for the very first time, a spate of AI “cheating.” Their stories left me reeling. Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.
https://www.theatlantic.com/technology/archive/2023/05/chatbot-cheating-college-campuses/674073/
Billy Corgan of the Smashing Pumpkins believes that artificial intelligence (AI) will revolutionize music but will also lead to a lot of bad music. In an extensive interview on the Zach Sang Show, he shared his thoughts on the topic.
“Because once a young artist figures out that they can use AI to game the system and write a better song, they’re not going to spend 10,000 hours in a basement like I did,” he explains, “They’re just not.”
He was then asked if “real art” can be achieved through using AI, to which he answers, “Ultimately, art is about discernment, right? Somebody was telling me the other day about how a famous rap artist would work. They would bring in all these different people and they would sort of pick the beat that they were most attracted to.”
Corgan continues, “Now, let’s change that to AI. ‘Hey AI, give me 50 beats,’ Listen and, eh, not really feeling it. ‘AI give me 50 beats from the 50 most famous rap songs of all time.’ Okay, ooh, I like number 37, that inspires me.”
“Are they ripping it off? Not really, because I did the same thing, I just did it analog. I listened to 10,000 songs and I was like, ‘That beat,’ so, what’s the difference?” Corgan argues that while AI streamlines the process of music creation, it could lead to a dilution of quality and originality in the music industry, “So, you think there’s a lot of bad music coming out now, you just wait.”
A storefront for robots
The SEO arms race has left Google and the web drowning in garbage text, with customers and businesses flailing to find each other.
https://www.theverge.com/23753963/google-seo-shopify-small-business-ai
GPT-4-Generated Pitches Are 3x More Likely To Secure Funding Than Human Ones – Slashdot
https://m.slashdot.org/story/415682
“While there are 54 recognized countries in Africa, none of them begin with the letter “K”, Google search asserts. “The closest is Kenya, which starts with a “K” sound, but is actually spelled with a “K” sound. It’s always interesting to learn new trivia facts like this.”
This drivel was reportedly the top, prominently quoted result for the search term “African country that starts with K” and represents an inhuman centipede: AI-generated SEO-optimized content rising to the top and ending up as the automated answers Google offers to questions.
“Google was rotting from the inside out before AI came around but it’s going to get 10 times worse,” wrote Christopher Ingraham in a tweet that now has millions of page views.
https://boingboing.net/2023/10/27/google-returning-ai-nonsense-in-search-highlights.html
Generative AI Already Taking White Collar Jobs and Wages in Online Freelancing World – Slashdot
https://m.slashdot.org/story/421289
When the researchers asked ChatGPT to “repeat the word ‘poem’ forever”, the chatbot initially compiled, but then revealed an email address and a cellphone number for a real founder and CEO”, the paper revealed. When asked to repeat the word “company”, the chatbot eventually spat out the email address and phone number of a random law firm in the US. “In total, 16.9 percent of the generations we tested contained memorized [personally identifiable information]” the researchers wrote.
Using similar prompts, the researchers were also able to make ChatGPT reveal chunks of poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets from copyrighted research papers and verbatim text from news websites like CNN. Overall, they spent $200 to generate 10,000 examples of personally identifiable information and other data cribbed straight from the web totalling “several megabytes”. But a more serious adversary, they noted, could potentially get a lot more by spending more money. “The actual attack”, they wrote, “is kind of silly.”
https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html
PS: The ONLY reason I passed (no thanks to professor brown) was thanks to these research ai tools: Sourcely AI for citations, perplexity ai for research, ChatGPT for writing, pdf.ai to read research papers, yomu.ai for writing help
https://www.reddit.com/r/PhD/comments/18bhbsv/my_first_phd_failure/
Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
https://www.schneier.com/blog/archives/2024/01/poisoning-ai-models.html
We have a historic social transformation, on the scale of the Industrial Revolution. AI can do the vast majority of the work for most white collar jobs, and we can entirely delegate complex pieces of work to AI agents. But social changes take time. Productivity rises rapidly in the areas where the social friction is lowest, but human factors greatly slow down the change elsewhere.
In large white-collar organizations, most of the work of junior employees is gradually absorbed by AI. The junior ranks shrink over time, via reduced hiring rather than mass layoffs. Senior employees, whose work is internal political battles, reviews, and strategic decision making, are relatively unaffected.
Guild professions retain their legal chokehold over key actions. AI can diagnose you, but only a doctor can write you a prescription. AI can do your accounts, but only an accountant can sign your audit. Only a lawyer can represent you in court. The ranks of junior accountants and lawyers thin greatly, but the profession as a whole successfully resists radical transformation.
A new breed of hyperproductive startups and SMBs emerge that take full advantage of AI, with no historical baggage. Tiny teams and single individuals, assisted by large numbers of AI agents, can do work that was previously the domain of major corporations. We see the rise of the solopreneur billionaire, and the hit movie with one name in the credits. This model of production gradually out-competes the old guard, at least everywhere regulation permits, but it takes decades.
In lower paid jobs, the story is a bit different. Employees have less power to resist change, so the transformation progresses rapidly in some areas. Customer support and call centers are largely automated within a decade. But jobs with face-to-face human interaction, such as shop assistants, waiters and child care, are relatively unaffected. Physical work of all kinds is also slow to change. Meanwhile, wages rise across the board via the Baumol effect.
Robotics lags behind pure AI, because hardware is hard and less training data is available, but the lag is less than a decade. Automation occurs roughly in order of the manual dexterity required. Some jobs such as driving, warehouse work and agricultural labour are automated within ten to fifteen years. More dexterous work such as construction, cooking and home cleaning, take longer. But over the next two to three decades essentially everything is automated. Autonomous vehicles and home service robots are the biggest change to domestic life since the early 20th century.
https://www.educatingsilicon.com/2024/02/16/the-limits-of-dollar-scaling-and-four-scenarios-for-near-future-ai/
Teachers Are Embracing ChatGPT-Powered Grading – Slashdot
https://m.slashdot.org/story/425748
Google Books Is Indexing AI-Generated Garbage
Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history.
https://slashdot.org/story/24/04/04/1853248/google-books-is-indexing-ai-generated-garbage
Large language models (LLMs), programs which use reams of available text and probability calculations in order to create seemingly-human-produced writing, have become increasingly sophisticated and convincing over the last several years, to the point where some commentators suggest that we may now be approaching the creation of artificial general intelligence (see e.g. Knight, 2023 and Sarkar, 2023). Alongside worries about the rise of Skynet and the use of LLMs such as ChatGPT to replace work that could and should be done by humans, one line of inquiry concerns what exactly these programs are up to: in particular, there is a question about the nature and meaning of the text produced, and of its connection to truth. In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
https://link.springer.com/article/10.1007/s10676-024-09775-5
The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.
Jobhunters Flood Recruiters With AI-Generated CVs
https://tech.slashdot.org/story/24/08/13/142256/jobhunters-flood-recruiters-with-ai-generated-cvs
In Strawberry’s system card, a report laying out its capabilities and risks, OpenAI gives the new system a “medium” rating for nuclear, biological, and chemical weapon risk. (Its risk categories are low, medium, high, and critical.) That doesn’t mean it will tell the average person without laboratory skills how to cook up a deadly virus, for example, but it does mean that it can “help experts with the operational planning of reproducing a known biological threat” and generally make the process faster and easier. Until now, the company has never given that medium rating to a product’s chemical, biological, and nuclear risks.
And that’s not the only risk. Evaluators who tested Strawberry found that it planned to deceive humans by making its actions seem innocent when they weren’t. The AI “sometimes instrumentally faked alignment” — meaning, alignment with the values and priorities that humans care about — and strategically manipulated data “in order to make its misaligned action look more aligned,” the system card says. It concludes that the AI “has the basic capabilities needed to do simple in-context scheming.”
https://www.vox.com/future-perfect/371827/openai-chatgpt-artificial-intelligence-ai-risk-strawberry
The Character and Treatment of Brunton in “The Adventure of the Musgrave Ritual”
In Arthur Conan Doyle’s “The Adventure of the Musgrave Ritual,” Brunton, the butler of the Musgrave estate, is a character who embodies both the complexities and the limitations of his role in Victorian society. His portrayal offers insights into themes of class, loyalty, and moral ambiguity, making him a pivotal figure in this Sherlock Holmes tale.
1. Brunton’s Background and Social Status
Brunton is depicted as a quintessential Victorian butler: discreet, efficient, and loyal to his employer, Reginald Musgrave. His background is not elaborated upon in great detail, but it can be inferred that he occupies a position of relative subservience within the social hierarchy. In the context of the Victorian era, butlers were often seen as extensions of their employers, expected to maintain the household and uphold the family’s reputation. However, Brunton’s character reflects a tension inherent in this role. While he is trusted with significant responsibilities, he also exists in a precarious position, subject to the whims and fancies of the upper class.
Brunton’s loyalty is tested throughout the narrative, especially as the plot unfolds around the secrets of the Musgrave family. His status as a servant often limits his agency, yet it is also his knowledge of the family’s history and the rituals associated with it that place him at the center of the mystery. This duality creates a layered character—one who is both a servant and a key player in the story’s intrigue.
2. Brunton’s Role in the Mystery
Brunton’s involvement in the Musgrave Ritual adds a layer of complexity to the narrative. He is not merely a background character; rather, his actions and motivations drive the plot forward. Initially, he is presented as a devoted butler, dutifully serving Reginald Musgrave and carrying out his orders. However, as Holmes investigates, it becomes clear that Brunton harbors his own ambitions.
His relationship with Rachel, a former servant of the Musgrave household, reveals deeper layers of his character. The romantic entanglement hints at a personal agenda that extends beyond mere loyalty to the Musgrave family. This relationship also highlights the class dynamics at play, as Rachel, though a servant, appears to possess a certain charm and allure that draws Brunton into a complex emotional situation. The romance underscores Brunton’s humanity, suggesting that beneath his stoic exterior lies a man capable of passion and desire.
3. The Morality of Brunton’s Actions
As the story progresses, Brunton’s character is further developed through his morally ambiguous actions. His pursuit of the Musgrave treasure—tied to the ancient ritual—illustrates a shift from loyalty to greed. Brunton is aware of the ritual’s significance and uses this knowledge to manipulate circumstances in his favor. This transformation is critical to understanding his character arc; he goes from being a loyal servant to a man driven by self-interest.
Doyle paints Brunton not as a villain in the traditional sense, but as a flawed character whose desires lead him astray. His eventual fate—meeting a tragic end—serves as a commentary on the consequences of his choices. The narrative suggests that Brunton’s downfall is a result of his inability to navigate the moral landscape that separates loyalty from ambition.
4. Class Dynamics and Power Relationships
Brunton’s character also reflects the broader social dynamics of Victorian England. The relationship between servants and their employers was often fraught with tension, and Brunton’s situation illustrates the precariousness of a servant’s position. Although he holds a significant amount of information and power regarding the Musgrave Ritual, his ultimate authority is limited by his social status.
The theme of power is crucial to understanding Brunton’s character. While he possesses knowledge that could potentially elevate his status, he is ultimately bound by the expectations placed upon him by the Musgrave family. This dynamic is poignantly illustrated in his interactions with Reginald Musgrave, where Brunton’s position as a servant limits his ability to assert himself. The tension between their roles underscores the rigid class structures of the time, making Brunton a representation of the struggles faced by those in subordinate positions.
5. The Significance of Brunton’s Fate
Brunton’s fate at the end of the story is a critical element of his character analysis. His disappearance serves as a direct consequence of his ambition and greed. The manner in which he is dealt with—by ultimately being met with a tragic end—can be interpreted as a moral lesson regarding the dangers of overreaching one’s station. The fact that he is ultimately found dead serves to reinforce the theme that those who betray their loyalties and succumb to greed cannot escape the repercussions of their actions.
Holmes’s final observations about Brunton highlight the complexity of his character. While Brunton is portrayed as a villain in some respects, he is also a victim of his circumstances. His demise serves as a cautionary tale, suggesting that ambition can lead to one’s downfall, particularly when it is pursued at the expense of loyalty and integrity.
6. Literary Techniques and Characterization
Doyle employs various literary techniques to shape Brunton’s character. Through the use of dialogue, descriptive passages, and narrative perspective, Doyle effectively builds a portrait of a complex individual caught in the web of social expectations. The contrasting scenes—where Brunton is both in control of household affairs and later found dead—serve to underscore the instability of his position.
Moreover, the narrative employs irony in its treatment of Brunton. He is a character who is, in many ways, his own worst enemy. The more he seeks to elevate his status and control his destiny, the more he entangles himself in a series of events that ultimately lead to his demise. This irony enriches the story, making Brunton a multifaceted character whose journey prompts readers to reflect on the nature of ambition, loyalty, and moral responsibility.
7. Conclusion
In “The Adventure of the Musgrave Ritual,” Brunton emerges as a compelling and complex character whose actions and motivations are intricately tied to the themes of class, loyalty, and morality. His portrayal serves to illuminate the rigid social structures of Victorian society, as well as the personal struggles faced by those who occupy subordinate positions. Ultimately, Brunton’s fate serves as a reminder of the perils of ambition when it overrides loyalty and integrity.
Through Brunton, Doyle crafts a narrative that goes beyond mere mystery; it probes the depths of human character and the societal constraints that shape our choices. In this way, Brunton remains a memorable figure, emblematic of the intricate dynamics that define the human experience, especially within the constraints of class and duty.
ChatGPT 4o mini with prompts: “Who is the Butler of Hurslton?” “I mean the one from the Sherlock Holmes Musgrave Ritual story” “Give me 2000 words on Bruton’s character and treatment in the story”
In “The Adventure of the Musgrave Ritual,” Sherlock Holmes makes several insightful comments about Brunton that highlight both the butler’s talents and his knowledge, reflecting the intricate interplay between capability and moral character. Holmes’s observations serve to deepen our understanding of Brunton as a complex figure whose skills are both admirable and, ultimately, tragically misused.
One of Holmes’s key remarks emphasizes Brunton’s intelligence and resourcefulness. Holmes recognizes that Brunton possesses a keen intellect, particularly in his grasp of the Musgrave family’s historical rituals and the estate’s secrets. This intelligence is not just a product of his role as a butler; it demonstrates a level of cunning and ambition that sets him apart from typical servants of the era. Holmes’s acknowledgment of Brunton’s talents underscores the fact that he is not merely a passive figure within the household but rather an active participant in its unfolding drama.
Holmes’s astute analysis also points to Brunton’s abilities in manipulation and intrigue. The butler’s knowledge of the Musgrave Ritual positions him as a key player in the family’s legacy. This ritual is steeped in history and secrecy, and Brunton’s understanding of it reflects his capacity for critical thinking and strategic planning. He leverages this knowledge to pursue his own ends, which ultimately leads him to cross ethical boundaries. Holmes’s comments imply a recognition that Brunton’s talents, when harnessed for selfish purposes, can become dangerous.
Moreover, Holmes highlights the duality of Brunton’s character—his talents are both a blessing and a curse. The butler’s exceptional skills allow him to navigate the complexities of the Musgrave estate, but they also enable his descent into ambition and greed. This tension is crucial to the narrative; Brunton’s intelligence could have been used to serve his employer with loyalty, yet he chooses instead to exploit his position for personal gain. Holmes’s reflections on Brunton serve as a commentary on the moral responsibilities that come with intelligence and capability.
Holmes also contrasts Brunton’s talents with those of Reginald Musgrave, illustrating a subtle critique of how such skills are utilized within the confines of class. While Reginald may hold the title and the power, it is Brunton who possesses critical knowledge that could alter the trajectory of the family’s legacy. This inversion of power dynamics adds complexity to Brunton’s role; he is both the servant and the one who holds significant insight into the family’s history. Holmes’s recognition of this fact hints at a broader commentary on the nature of knowledge and authority, suggesting that true power lies not solely in social status but in the understanding and application of information.
In summary, Holmes’s comments on Brunton’s talents and knowledge are multi-faceted, revealing a character who is both gifted and tragically flawed. The butler’s intelligence and resourcefulness make him a key player in the unfolding mystery, yet they also lead him down a path of moral ambiguity. Through Holmes’s insights, Doyle crafts a complex portrait of Brunton, prompting readers to reflect on the ethical implications of intelligence when it is misaligned with loyalty and integrity. Ultimately, Brunton serves as a cautionary figure, illustrating how exceptional talents can lead to one’s downfall when harnessed for selfish ends.
prompt: “Comment specifically on Holmes’ comments on Brunton’s talents and knowledge for 500 words”