I would like to get the opinions of the community here, as well as someone from the OpenAI team, on this. Currently, a lot of candidates are rejected by automation such as keyword matching.
I am asking because I see “Automated determinations of eligibility for credit, employment, educational institutions, or public assistance services” listed as a disallowed usage of OpenAI technology, and I fully agree with that policy. Still, are the following use cases legitimate and allowed?
- TAs (talent acquisition specialists) will give criteria like “Candidate should have worked with Azure Databricks and Spark”, “Candidate must have been using Java for more than 4 years”, and “Candidate should currently be in a mid-senior manager role”. We feed these criteria and an anonymized resume to the ChatGPT API and ask it to answer “yes, the candidate has bla bla…” or “no, the candidate bla bla…” in a JSON format (a rough sketch of such a call is below this list).
- There will be open-ended questions like “Describe a time you faced a challenge related to backend development. How did you approach it?” and criteria like “Looking for the candidate’s ability to break down problems into smaller ones”, and we ask the API to assign a score out of 10 along with its reasoning (also sketched below).
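For concreteness, here is a minimal sketch of what I have in mind, using the OpenAI Python SDK with JSON-mode output. The model name, prompt wording, and function names are placeholders to illustrate the shape of the calls, not our production code:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder; any JSON-mode-capable model


def check_criteria(criteria: list[str], anonymized_resume: str) -> dict:
    """Use case 1: per-criterion yes/no with a short justification, as JSON."""
    prompt = (
        "For each criterion, answer whether the resume satisfies it.\n"
        'Return JSON: {"results": [{"criterion": str, "met": "yes"|"no", "evidence": str}]}\n\n'
        "Criteria:\n- " + "\n- ".join(criteria) + f"\n\nResume:\n{anonymized_resume}"
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0,  # keep the screening output as repeatable as possible
    )
    return json.loads(resp.choices[0].message.content)


def score_answer(question: str, rubric: str, answer: str) -> dict:
    """Use case 2: score an open-ended answer out of 10, with reasoning, as JSON."""
    prompt = (
        f"Question: {question}\nRubric: {rubric}\nAnswer: {answer}\n\n"
        'Return JSON: {"score": int (0-10), "reason": str}'
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)
```

The structured output then goes in front of the TAs; nothing downstream acts on it automatically.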
In both of the above use cases:
- We send no PII, to reduce any bias from it; we prefer not to send even the candidate’s gender (see the redaction sketch after this list). In the second use case, ChatGPT doesn’t even know anything about the person answering the question.
- No hiring decision is made automatically. The scores are objective, and the answers are disclosed with the possibility of human override; it is up to the TAs to decide, based on the information available, whom to invite to an interview.
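On the anonymization point, we do something along these lines before any text leaves our systems. This is a simplified sketch; the regex patterns and the `redact` helper are illustrative, and a real pipeline would use a proper PII/NER tool on top of this:

```python
import re

# Illustrative patterns only; these catch obvious, well-formatted PII.
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "[URL]": re.compile(r"https?://\S+"),
}


def redact(text: str, known_names: tuple[str, ...] = ()) -> str:
    """Replace obvious PII with placeholder tokens before any API call."""
    for token, pattern in PII_PATTERNS.items():
        text = pattern.sub(token, text)
    # Names can't be reliably caught by regex alone, so we also strip any
    # known ones (e.g. taken from the ATS record) before the resume is sent.
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text
```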
Let me know how safe you think these approaches are, and whether you have any ideas to make them safer.