TL;DR: I need English professors to give homework to the AI, please.
I have been exploring the idea of a machine that can assist humans with writing at an academic level, be it argumentative essays, creative writing, and so on…
David's (@daveshapautomator) idea of NLCA and his books have been very enlightening over the past two years.
Recently, a post [1] from Anna Mills (@annarmills), in which David implemented a simple solution on this topic, showed that this is a problem many people have run into.
I would like to explore this topic in more depth; however, I cannot really measure how effective the machine is without humans to compare it against.
I would love to find English professors who could provide the assignments they give their students, so we can build a dataset to discover how effective GPT-3 is versus humans.
I think it's important to understand how effective LLMs are at constructing text. Take as an example AlphaGo's games with Lee Sedol: without such a machine, he may never have been pushed into playing the move that has been labeled the "God move" in Go. A writing assistant can be a tool that pushes us above and beyond and increases our abilities, just as AI has helped with chess, Go, and much more.
References
[1] Using metaprompting to allow GPT-3 to design its own cognitive tasks
You can also look for librarians (highly literate people) as well as journalists, who are trained in rhetoric. My girlfriend is doing her thesis on characterizing GPT-3 output; otherwise I'm sure she'd be interested in helping out.
I’m an English professor (annarmills.com) and I’ve started testing GPT-3 on my prompts, mostly with the goal of sharing information with other teachers about what we can expect. I like the idea of a writing assistant stimulating writers to improve, and I have been experimenting with this in my own writing; I just don’t want students to turn in AI-generated work without acknowledging it. So I’d love to know more about your interests and goals.
Also, you can get a lot of sample prompts from MIT OpenCourseWare. There are so many openly licensed assignments on LibreTexts.org and OER Commons too. I guess the time-consuming part is evaluating how well GPT-3 does against the assignment’s grading rubric, though.
If there are enough labeled, graded assignments, we can create a self-supervised grading machine (so it can label unlabeled data). That is the next step.
However, I first have to determine the feasibility of the project on a small scale.
Then find larger datasets of assignments, and then create the means to expand them using unlabeled data.
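To make the expansion step concrete, here's a rough sketch of what I have in mind: use a handful of human-graded essays as few-shot examples, ask GPT-3 to score unlabeled essays, and fold the new labels back into the dataset. The seed data, prompt format, and score scale are all placeholders of mine, and it assumes the classic Completions API; a real version would also filter out low-confidence labels.

```python
import openai

# Hypothetical seed set: a few human-graded essays with 0-10 scores.
SEED = [
    {"essay": "First sample essay text...", "score": 8},
    {"essay": "Second sample essay text...", "score": 5},
]

def grade_essay(essay: str) -> int:
    """Score an essay with GPT-3, using the seed set as few-shot examples."""
    shots = "\n\n".join(
        f"Essay:\n{ex['essay']}\nScore (0-10): {ex['score']}" for ex in SEED
    )
    prompt = f"{shots}\n\nEssay:\n{essay}\nScore (0-10):"
    resp = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=2,
        temperature=0,  # deterministic scoring
    )
    return int(resp.choices[0].text.strip())

def expand(unlabeled: list[str]) -> list[dict]:
    """Pseudo-label unlabeled essays and fold them into the dataset."""
    labeled = list(SEED)
    for essay in unlabeled:
        labeled.append({"essay": essay, "score": grade_essay(essay)})
    return labeled
```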
My goal is to understand and develop cognition, which is fundamental for AGI.
I need insight into how English professors come up with their lecture plans and assignments.
Being an engineer, this kind of stuff has always seemed like witchcraft to me, haha.
I am a lawyer but graduated from a bilingual school, so apart from my native Spanish I also speak English, German, and Russian. Happy to help! I have lots of old language books for the IGCSE and other international certifications.
The problem is I don't really know how the davinci models will perform, because most of these reading and writing books come from a pre-internet era of letters and the like, so I am unsure whether the models had enough training on these subjects.
It’s funny that it seems like witchcraft! To me it doesn’t seem that different from prompt engineering (as I’m starting to understand it). But I’ve been doing it so long…
We often use rubrics, sets of grading criteria. In some cases, we choose to assign points for each criterion as well as giving feedback on each one. Here’s an example of an assignment with its rubric.
Intriguing connection to cognition and AGI; I don’t totally get that. Maybe you mean that if it can grade assignments, that looks like critical thinking? I try to keep the perspective clear in my own head that GPT-3 isn’t doing critical thinking, even though it’s using statistics to produce word combinations that simulate and stimulate critical thinking.
You have a writing task, and then you have the rubric.
The task is composed of many goals, which are the cognitive tasks.
The transformer models can associate the rubric with the task and understand whether the goal is being achieved or not (see the sketch below).
GPT-3 can do critical thinking and planning if you add the pieces to it.
Self-attention is not just using statistics to produce word combinations that simulate and stimulate critical thinking.
Labeled, graded assignments give context to the transformers. The goal is to get the model to write the essays without any prompt engineering, zero-shot.
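Here's a minimal sketch of the rubric-task association I'm describing, with a made-up rubric and assignment and the classic Completions API; it encodes the rubric as data and asks the model to check a draft against each criterion.

```python
import openai

# Hypothetical rubric: criteria with point values, along the lines Anna described.
RUBRIC = {
    "clear thesis statement": 4,
    "evidence supports each claim": 4,
    "organization and transitions": 2,
}

# Hypothetical writing task.
TASK = "Write a 500-word argumentative essay on whether homework should be graded."

def check_against_rubric(draft: str) -> str:
    """Ask the model whether a draft meets each rubric criterion."""
    criteria = "\n".join(f"- {name} ({pts} pts)" for name, pts in RUBRIC.items())
    prompt = (
        f"Assignment: {TASK}\n\nRubric:\n{criteria}\n\nEssay:\n{draft}\n\n"
        "For each rubric criterion, state whether the essay meets it and why:"
    )
    resp = openai.Completion.create(
        engine="text-davinci-002", prompt=prompt, max_tokens=300, temperature=0
    )
    return resp.choices[0].text.strip()
```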
I see. How many labeled, evaluated essays would you need to get started? If it’s a small number, we might find CC BY-licensed student essays in OER textbooks that we could use. Here are some sample papers licensed that way. Otherwise, it seems like student consent/compensation would be needed, and then a teacher or teachers would need to do the labeling.
Honestly, I don’t know how many would be enough. Probably in the thousands.
The other problem is the format. Making a dataset from PDFs will probably be quite the endeavour.
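For what it's worth, a first pass at the PDF problem could be a sketch like this, assuming the pypdf library and a hypothetical folder of assignment PDFs (extraction quality will vary a lot with each PDF's layout):

```python
import json
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf

def pdfs_to_jsonl(pdf_dir: str, out_path: str) -> None:
    """Extract raw text from every PDF in a folder into one JSONL dataset."""
    with open(out_path, "w", encoding="utf-8") as out:
        for pdf in sorted(Path(pdf_dir).glob("*.pdf")):
            reader = PdfReader(str(pdf))
            # extract_text() can return None on image-only pages
            text = "\n".join(page.extract_text() or "" for page in reader.pages)
            out.write(json.dumps({"source": pdf.name, "text": text}) + "\n")

pdfs_to_jsonl("assignments/", "assignments.jsonl")
```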
You might want to see the AP prompts for the AP Language class. I’ve been curious whether I could train GPT-3 to pass one of the extended response questions (known as Q3). You can see the past questions here as well as the top responses.
Would a linguist with a master’s in education be of any use? That is, if I can ever figure out how to get started doing Evals in the first place. Seems to be a huge secret.
DAK