The Robots are coming: Exploring the Implications of OpenAI Codex on Introductory Programming
Research Question
How does Codex perform on first year assessments compared with CS1 students?
How does Codex perform on variations of a benchmark computing education problem that differ in context and level of detail?
How much variety is there in the solutions generated by Codex?
Future Work
What problem types are difficult for tools such as GPT-3 and Codex? How do such tools perform on Parsons problems, MCQs, and problems with contextual specifications?
How does Codex perform on other question types such as “Explain in plain English”, identifying bugs in code, and fact-based questions (e.g., list all the identifiers in the code provided)?
Can automated plagiarism detection tools identify code generated by Codex?
Can tools like Codex be utilised to detect plagiarism?
How can Codex be used to improve student learning?
How should we adapt course content and assessment approaches as the use of tools such as Codex becomes more prevalent?
Methodology
AI evaluation
CS1 Programming Tests
Rainfall Problem Variants
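The Rainfall Problem named above is a classic CS1 benchmark (averaging sensor readings up to a sentinel). A minimal sketch of one common formulation, assuming the usual sentinel-and-average variant with a sentinel of 99999 and negative readings treated as invalid:

```python
def rainfall(values):
    """Average the non-negative readings that appear before the
    sentinel value 99999; return None if no valid readings occur."""
    total = 0
    count = 0
    for v in values:
        if v == 99999:   # sentinel terminates the input
            break
        if v >= 0:       # negative readings are invalid and skipped
            total += v
            count += 1
    return total / count if count else None
```

Variants used in such studies typically alter the surrounding story context or the level of detail in the specification while keeping this underlying computation fixed.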
Future direction of CS pedagogy
Emphasis on code evaluation in introductory programming courses
Reliance on supervised exams
AI tools can assess students' submissions
Challenges
What is considered academic misconduct?
Cheating may be exacerbated
Over-reliance on Codex
No reliable way to detect whether submitted code was written by an AI