apiKey
Optional. The Groq API key to use for requests. Defaults to process.env.GROQ_API_KEY.

maxTokens
Optional. The maximum number of tokens that the model can process in a single response. This limit ensures computational efficiency and resource management.

model
Optional. The name of the model to use. Default: "llama2-70b-4096"

stop
Optional. Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

streaming
Optional. Whether or not to stream responses.

temperature
Optional. The temperature to use for sampling. Default: 0.7

Generated using TypeDoc
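Assuming these options belong to a ChatGroq-style constructor (as in LangChain's Groq integration), a minimal sketch of a configuration object might look like the following. The `ChatGroqInput` interface here is a local illustration built from the fields documented above, not the package's actual exported type:

```typescript
// Local stand-in for the options described in this reference.
// Field names and defaults are taken from the documentation above.
interface ChatGroqInput {
  apiKey?: string;      // defaults to process.env.GROQ_API_KEY
  maxTokens?: number;   // cap on tokens in a single response
  model?: string;       // defaults to "llama2-70b-4096"
  stop?: string[];      // up to 4 stop sequences
  streaming?: boolean;  // whether to stream responses
  temperature?: number; // sampling temperature, defaults to 0.7
}

// Example configuration mirroring the documented defaults.
const fields: ChatGroqInput = {
  apiKey: process.env.GROQ_API_KEY,
  model: "llama2-70b-4096",
  temperature: 0.7,
  maxTokens: 1024,
  stop: ["\n\n"],
  streaming: false,
};
```

In the real package this object would be passed to the model's constructor; every field is optional, so omitting one falls back to the defaults listed above.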