GitHub has launched a technical preview of its machine learning (ML) assistant Copilot. The operators of the version control platform call it an "AI Pair Programmer": as in pair programming, the virtual assistant is meant to suggest improvements to or replacements for source code.
The system is based on OpenAI Codex, an AI system designed to translate natural language into source code. OpenAI is the company behind the Generative Pre-trained Transformer 3 (GPT-3) language model; GitHub's parent Microsoft invested $1 billion in it in 2019, the same year OpenAI switched from a non-profit structure to a capped-profit model.
Copilot when coding
Microsoft has been using GPT-3 since May for its low-code platform Power Fx, translating requirements formulated in natural language into Power Fx syntax. Unlike GPT-3, OpenAI Codex is designed specifically for code generation: a large part of the ML model's training data consists of public source code. This puts Codex in competition with the ML-based code completion tool Tabnine.
The comment serves as the basis for the function created by Copilot.
The Copilot page on GitHub shows examples in which the system creates a complete function from a comment, among other things. It is also supposed to suggest meaningful unit tests based on the source code, help create boilerplate code such as getters and setters, and replace repetitive definitions.
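The examples on the Copilot page follow this pattern: the developer writes a comment (often together with a function signature), and the assistant proposes a matching body. A hypothetical illustration of that workflow in Python; the function name, logic, and test are illustrative and not taken from GitHub's published examples:

```python
# Developer-written prompt: a comment plus signature.
# Compute the average of a list of numbers, returning 0.0 for an empty list.
def average(numbers):
    # Body of the kind an assistant might propose:
    # guard against division by zero for empty input.
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

# A matching unit test, the second kind of suggestion the page describes.
def test_average():
    assert average([1, 2, 3]) == 2.0
    assert average([]) == 0.0
```

In both cases the comment, not the code, is the input: the assistant infers intent from the natural-language description and the signature.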
The assistant recognizes the pattern of the definition and adapts the comment syntax for other programming languages.
Currently, Copilot only uses the respective file to establish context, but GitHub apparently plans to extend this to the entire project in the medium term. Before using it, it is worth reading the FAQ at the bottom of the Copilot page, which includes information on telemetry, personal data, and responsible AI. Among other things, the FAQ points out that the generated code may not be perfect and may even be useless.