The only two ways to detect AI output from OpenAI are if 1) OpenAI logged every prompt-response context window ever generated and users submitted text to search against this database, or 2) both the prompt and response were submitted to OpenAI and it output the token probabilities for the entire context window. If the cumulative probability at the last token is below, say, 0.0000001, then the null hypothesis is rejected and the text is not likely from the OpenAI model.
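To make option 2 concrete, here is a minimal sketch of that likelihood test. It assumes white-box access to per-token probabilities, which OpenAI does not expose for arbitrary prompt-response pairs, so an open model loaded through Hugging Face transformers (gpt2) stands in; the prompt, response and 1e-7 threshold are purely illustrative.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Open model as a stand-in for hypothetical white-box access to OpenAI's token probabilities.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def response_log_prob(prompt: str, response: str) -> float:
    """Sum of log-probabilities the model assigns to the response tokens, given the prompt."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits          # shape: [1, seq_len, vocab_size]
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Logits at position t predict the token at position t+1,
    # so only the positions that generate response tokens are scored.
    for t in range(prompt_len - 1, full_ids.shape[1] - 1):
        next_token = full_ids[0, t + 1]
        total += log_probs[0, t, next_token].item()
    return total

# Illustrative decision rule from the post: reject "written by this model"
# if the cumulative probability falls below 1e-7. In practice the cumulative
# probability of any long text is tiny, so an average per-token log-probability
# (or perplexity) comparison would be the more usual statistic.
lp = response_log_prob("Q: What is the capital of France?\nA:", " Paris")
print("plausibly from this model:", lp > math.log(1e-7))
```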
I wish we could reframe this discussion topic towards accreditation for AIs that co-create with humans. Any scientific journal or publisher worthy of that title insists on appropriate accreditation for “partners” who contribute to evolving concepts, ideas, texts and images - right?
So who will campaign for AIs to be accredited in the same way?
I’ve been part of a working group that creates policies for the use of AI in an academic setting, and to be completely honest with you, the people who advocate for citing large language models lack a fundamental understanding of how they work.
If you want to see how to properly cite the use of AI in an academic setting, I’d suggest reading some papers within the field of computer science or machine learning, as they solved that problem a long time ago.
This is the question. Can you? Prove this and you will have made a major breakthrough. Not saying it’s not possible, just that as of today we can’t do it. It’s easy to get GPT-4 to write in a particular voice, and it’s good at it.
Then it gets weird. Try this for example:
“Hi, how are you!”
LLM or me?
If you could truly identify that it came from an LLM, it would be huge. Logically it seems impossible to me, but I’m no expert.
No Maty - I don’t see anything in the literature related to how dumb users are who advocate for accrediting AIs as a means of promoting Ethical Governance. Am I missing something, mate?
Just to be clear here, I’m not saying people who advocate for the accreditation of AI in scientific literature are dumb at all; what I’m saying is that they’re experts in different areas than machine learning, and don’t understand what an LLM actually does.
At the first meeting of the previously mentioned working group, over 50% had never used an LLM, or even visited the ChatGPT site.
If you ask me, the biggest threat to academia in relation to AI is random people who use AI to generate fake studies that further their own interests, not scientists who use an LLM for proofreading and general language improvements.
I couldn’t disagree more. The biggest threat to academia in relation to AI is NOT embracing random people who use AI. Diversity, Inclusivity, and an open mind in OpenAI is, in my opinion, a good way forward for AI. So here I’m staying with my theme. A new topic for advocates of OpenAI to consider: Diversity in Ethical Governance, as the name suggests - within and/or without OpenAI.
Yes yes yes. My thought though is who really cares if an AI helped one write an article or Think Piece?
Increasingly no one, right? So moving on to using AI as a fact checker seems like a sweet direction in more ways than one. But then you have to have a high degree of confidence that your AI has its FACTS straight! Which is tricky for AIs currently, because their datasets are based on what’s “known” as opposed to what might be considered a true fact in the future.
I think you may be misinterpreting what I’m saying. I have no issue with researchers and scientists working with outsiders; what I’m talking about as a threat to academia is people who think they can skip multiple years of education and jump straight to scientific publishing.
You are welcome to create such a topic on the forum if you want to have a more in depth discussion about the subject.
Interesting discussion you have here. I think AI should not be restricted anywhere. AI is raising the bar for everything. If you can write a study using AI, it is probably not that valuable a topic, and likely not advancing anything. Using it for proofreading, ideas and other enhancement is fine and an entirely different topic. But I doubt you need to cite a tool that helps in those ways.
What if we accept that everything is written by AI unless we clearly see otherwise? OK, I am overexaggerating. But really, how valuable is an article that can be fully written by AI? It is just spitting out something that already exists.
If you can write something using AI with no human touch, perhaps it does not need to be written, and surely does not deserve to be read. In fact, we have had such content already for years - just google anything - it is basically the same thing in a thousand variations trying to race for the top spot. In a way, the overabundance of content, and the ability to create as much as needed can only lead to better quality. We would just need to adjust our values. Whatever will happen with search engines and how they will include AI might be the first step to get there.
I’m an Architect/Designer of Buildings and Brands, and yet I’m writing an article called HUGE BANG, a commission for The New Scientist, following a thought piece I published on Quora 11 months ago.
Given that the global scientific community of peer reviewers, and the whole of academia, are hyper-cautious about random people having thoughts, it takes an open-minded publisher to welcome a non-mainstream hypothesis. Honestly, I was hoping for a more open AI discussion here in this auspicious community.
N2U, thanks. I appreciate your reading and your caution. The article you attached is eye-wateringly concise, accessible and brimming with knowledge that has helped me understand you better too. So assuming I have your attention for a minute, and given I’ve re-read the article you attached - may I go further? Two questions: Why does it matter to anyone if I co-create with an AI? And, more defining from my perspective as a communicator, the reason “people” read anything at all these days is less to do with content or accuracy; they read because they enjoy THE STYLE of the writer. They enjoy the way he or she thinks, responds to criticism or review, and most importantly - whether they like it/him/her/them or not - they read to see the PERSONALITY of the author. Feel what they are getting at before it’s said. AI can’t do that yet, and until the goal is reached, it won’t, because it doesn’t really have any personality. If and when it ever can or does, anything written without a conscious perspective, ego and particular style is just boring to most. So in my opinion, a unique style with original thought, content with utility, delivered with the signature thumbprint of the author - writ large in every syllable and between each line - will likely be read by more than most scientifically proven, peer-reviewed, academically formatted things.
But the biggest thanks here should go to @mlaganovskis, who was the one who wrote it.
I think you and I can agree that AI can be used as a tool for writing all kinds of literature; what I believe is essential is that the human author takes ownership and responsibility for the text they submit for publishing.
I have no issues with you publishing in The New Scientist, congrats on the opportunity, my friend!
There’s a big distinction between a popular science magazine like The New Scientist and a peer-reviewed journal, and you’re completely right that your article will likely reach far more readers.
My forum posts also have more views than my entire body of scientific work, and I’m completely fine with that. My reason for writing articles is to communicate my findings to my peers, so we can continue to improve upon our work. You could sort of say peer-reviewed journals are the Slack channel for researchers.
This is an interesting question, Chomsky himself said that LLMs are merely a sophisticated exercise in simulation (a little too cynical, but it’s Chomsky, expected lol)
Now there is a broader issue for me; understand that it is a more optimistic look at things, so I am not being guided by fear (maybe I should be).
Most of the STRUCTURES of academic publications are essentially similar: citation format, theoretical foundation, introduction and summary.
They are all highly rigid conventions within academia, but they take up valuable writing time (time that could have been used on the research itself and to strengthen the hypothesis).
Now I’m going to be cynical: all these structural parts, this “label” of academia, already exist; what is new, in any article, is the hypothesis and the results.
I still do not advocate the use of AI in writing articles, much less LLMs, but the current scientific methodology needs to be reviewed; it alienates potential researchers by filling the rich scientific field with discouraging bureaucracy.
It makes sense in a world where this language guides progress, so that things don’t get confusing, but AI can help automate this process and allow a good researcher to focus on what really matters: research.
I am a little fearful to mention this here, but I think things in the world are all mostly confusing, and AI can help simplify them. I’m actually hoping that AI will do all the research to enable a good user to focus on what really matters: thinking.
Are you talking about using AI to write without modification?
I tried a test that gives GPT the original paper: humans can modify the same message, send it, and still evade detection. As for web-based detection programs, the result was 0%.
I found that some complex sentences are not created unless they are determined or learned. There was one time when it chose to answer on its own and explained interesting reasons why that meaning was not chosen: because it recognizes characters in other texts that do not convey the same meaning but show how to write.