google.generativeai.types.GenerationConfig
A simple dataclass used to configure the generation parameters of GenerativeModel.generate_content.
google.generativeai.types.GenerationConfig(
    candidate_count: (int | None) = None,
    stop_sequences: (Iterable[str] | None) = None,
    max_output_tokens: (int | None) = None,
    temperature: (float | None) = None,
    top_p: (float | None) = None,
    top_k: (int | None) = None,
    seed: (int | None) = None,
    response_mime_type: (str | None) = None,
    response_schema: (protos.Schema | Mapping[str, Any] | type | None) = None,
    presence_penalty: (float | None) = None,
    frequency_penalty: (float | None) = None,
    response_logprobs: (bool | None) = None,
    logprobs: (int | None) = None
)
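For example, a minimal sketch of passing a config to generate_content (the API key is a placeholder and the gemini-1.5-flash model name is an assumption that may differ in your environment):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Unset fields stay None, so the model's own defaults apply to them.
config = genai.types.GenerationConfig(
    candidate_count=1,
    stop_sequences=["\n\n"],
    max_output_tokens=256,
    temperature=0.7,
)

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
response = model.generate_content(
    "Write a one-line tagline for a coffee shop.",
    generation_config=config,
)
print(response.text)
```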
| Attributes | |
|---|---|
| `candidate_count` | Number of generated responses to return. |
| `stop_sequences` | The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response. |
| `max_output_tokens` | The maximum number of tokens to include in a candidate. If unset, this defaults to the output_token_limit specified in the model's specification. |
| `temperature` | Controls the randomness of the output. Note: the default value varies by model; see the `Model.temperature` attribute of the `Model` returned by `genai.get_model`. Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model. |
| `top_p` | Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on cumulative probability. Note: the default value varies by model; see the `Model.top_p` attribute of the `Model` returned by `genai.get_model`. |
| `top_k` | Optional. The maximum number of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Top-k sampling considers the set of `top_k` most probable tokens. Note: the default value varies by model; see the `Model.top_k` attribute of the `Model` returned by `genai.get_model`. |
| `seed` | Optional. Seed used in decoding. If not set, the request uses a randomly generated seed. |
| `response_mime_type` | Optional. Output response MIME type of the generated candidate text. Supported MIME types: `text/plain` (default, text output) and `application/json` (JSON response in the candidates). |
| `response_schema` | Optional. Specifies the format of the JSON requested if `response_mime_type` is `application/json`. See the sketch after this table. |
| `presence_penalty` | Optional. |
| `frequency_penalty` | Optional. |
| `response_logprobs` | Optional. If true, export the `logprobs` results in the response. |
| `logprobs` | Optional. Number of candidates of log probabilities to return at each step of decoding. |
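A sketch of JSON-constrained output using `response_mime_type` and `response_schema`. It assumes a model that supports JSON mode (the model name is a placeholder); the `Recipe` type is illustrative only:

```python
import typing_extensions as typing

import google.generativeai as genai

# Hypothetical schema for illustration: response_schema accepts plain
# Python types (including TypedDicts) as well as protos.Schema or a Mapping.
class Recipe(typing.TypedDict):
    name: str
    minutes: int

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
response = model.generate_content(
    "Suggest a quick breakfast recipe.",
    generation_config=genai.types.GenerationConfig(
        response_mime_type="application/json",
        response_schema=list[Recipe],
    ),
)
print(response.text)  # a JSON string matching the requested schema
```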
Methods
__eq__
__eq__(
    other
)
Return self==value.
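Because GenerationConfig is a dataclass, equality compares field values:

```python
from google.generativeai.types import GenerationConfig

a = GenerationConfig(temperature=0.5, top_k=40)
b = GenerationConfig(temperature=0.5, top_k=40)
assert a == b                                  # same field values compare equal
assert a != GenerationConfig(temperature=0.9)  # differing fields compare unequal
```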
| Class Variables | |
|---|---|
| candidate_count | `None` |
| frequency_penalty | `None` |
| logprobs | `None` |
| max_output_tokens | `None` |
| presence_penalty | `None` |
| response_logprobs | `None` |
| response_mime_type | `None` |
| response_schema | `None` |
| seed | `None` |
| stop_sequences | `None` |
| temperature | `None` |
| top_k | `None` |
| top_p | `None` |
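Since every field defaults to `None`, a plain mapping with only the fields you want to set is also accepted for `generation_config` (the model name below is an assumption):

```python
import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
response = model.generate_content(
    "Summarize the plot of Hamlet in two sentences.",
    # Dict form is equivalent to GenerationConfig(temperature=0.2,
    # max_output_tokens=128); unset keys fall back to model defaults.
    generation_config={"temperature": 0.2, "max_output_tokens": 128},
)
print(response.text)
```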