The biggest threat to academia in relation to AI lies in NOT embracing the ordinary people who use AI. Diversity, inclusivity, and an open mind in OpenAI is, in my opinion, a good way forward for AI. So, staying with my theme, here is a new topic for advocates of OpenAI to consider: diversity in ethical governance, as the name suggests, both within and outside OpenAI.
Two questions. First: why does it matter to anyone if I co-create with an AI? Second, and more telling from my perspective as a communicator: the reason people read anything at all these days has less to do with content or accuracy; they read because they enjoy THE STYLE of the writer. They enjoy the way he or she thinks and responds to criticism or review. And most importantly, whether they like the author or not, they read to feel the PERSONALITY of the author, to see what they are getting at before it's said. AI can't do that yet, and until that goal is reached it won't, because it doesn't really have any personality. Even if it ever does, anything written without a conscious perspective, an ego, and a particular style is just boring to most. So, in my opinion, a unique style with original thought, content with utility, delivered with the signature thumbprint of the author, written large in every syllable and between each line, will likely be read by more than most scientifically proven, peer-reviewed, academically formatted pieces.
I'll just link my answer from last time to get the discussion going.