How can we effectively fine-tune an OCI AI LLM for Code Generation?
Summary:
We wish to use a large language model (LLM) capable of generating code delivered in JSON format. To achieve this, we plan to fine-tune the model on a collection of scripts representing the types of code we want to generate; these will serve as examples for specific scenarios defined by the user. The main issue lies in the training dataset for fine-tuning: if the dataset format is limited to plain words, it restricts the model's ability to learn code. The ultimate goal of using OCI AI services is to automate 5G use cases and testing. Another idea we have is to obtain information from 3GPP, where all
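As a sketch of how code examples could be packaged for fine-tuning, the snippet below builds JSONL records of prompt/completion pairs, which is the general shape most fine-tuning pipelines (including OCI Generative AI's custom-model training) expect. The scenario descriptions and the `testbed` API names are hypothetical placeholders, not real 5G tooling; the point is that `json.dumps` escapes newlines and quotes, so multi-line scripts survive as training data even though the file is "just words":

```python
import json

# Hypothetical (scenario, script) pairs; in practice these would come
# from your existing library of 5G test scripts.
examples = [
    ("Attach a UE to the test network",
     'ue = testbed.attach(imsi="001010000000001")\nassert ue.state == "CONNECTED"'),
    ("Run a downlink throughput measurement",
     'result = testbed.iperf(direction="down", seconds=30)\nprint(result.mbps)'),
]

def to_jsonl_records(pairs):
    """Serialize (prompt, completion) pairs as JSONL lines.

    json.dumps escapes embedded newlines and quotes, so multi-line
    code bodies round-trip intact through the text-only dataset.
    """
    return [json.dumps({"prompt": p, "completion": c}) for p, c in pairs]

lines = to_jsonl_records(examples)
for line in lines:
    print(line)
```

Each output line is one self-contained JSON object, so the resulting file can be uploaded as a fine-tuning dataset without any special handling for the code inside it.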