google.generativeai.ChatSession
Contains an ongoing conversation with the model.
google.generativeai.ChatSession(
    model: GenerativeModel,
    history: (Iterable[content_types.StrictContentType] | None) = None,
    enable_automatic_function_calling: bool = False
)
>>> import google.generativeai as genai
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
>>> response = chat.send_message(...
This ChatSession object collects the messages sent and received in its
ChatSession.history attribute.
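For example, you can replay the accumulated turns (a minimal sketch; each history entry is a protos.Content with a role and parts, and the printed output is illustrative):
>>> for content in chat.history:
...     print(content.role, '->', content.parts[0].text)
user -> Hello
model -> Hello! How can I assist you today?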
| Arguments | |
|---|---|
| model | The model to use in the chat. |
| history | A chat history to initialize the object with. |
| Attributes | |
|---|---|
| history | The chat history. |
| last | Returns the last received genai.GenerateContentResponse. |
Methods
rewind
rewind() -> tuple[protos.Content, protos.Content]
Removes the last request/response pair from the chat history.
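For example (a minimal sketch; the session is assumed to have completed at least one exchange):
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> len(chat.history)
2
>>> request, response = chat.rewind()
>>> len(chat.history)
0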
send_message
send_message(
    content: content_types.ContentType,
    *,
    generation_config: generation_types.GenerationConfigType = None,
    safety_settings: safety_types.SafetySettingOptions = None,
    stream: bool = False,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
Sends the conversation history with the added message and returns the model’s response.
Appends the request and response to the conversation history.
>>> model = genai.GenerativeModel('models/gemini-1.5-flash')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
Call it with stream=True to receive response chunks as they are generated:
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
...   print(chunk.text, end='')
Once iteration over chunks is complete, the response and ChatSession are in states identical to the
stream=False case. Some properties are not available until iteration is complete.
Like GenerativeModel.generate_content, this method lets you override the model’s generation_config and
safety_settings.
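For instance, a minimal sketch of a per-message override (the prompt, parameter values, and safety setting are illustrative):
>>> response = chat.send_message(
...     "Write a one-line poem",
...     generation_config=genai.types.GenerationConfig(
...         temperature=0.2, max_output_tokens=64),
...     safety_settings={'HARASSMENT': 'BLOCK_ONLY_HIGH'})
>>> print(response.text)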
| Arguments | |
|---|---|
| content | The message contents. |
| generation_config | Overrides for the model’s generation config. |
| safety_settings | Overrides for the model’s safety settings. |
| stream | If True, yield response chunks as they are generated. |
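Per the signature above, tools and tool_config can also be supplied per message. A hedged sketch of passing a plain Python function as a tool (the multiply helper is hypothetical; starting the chat with enable_automatic_function_calling=True lets the session execute the call and return the final text):
>>> # hypothetical helper used as a tool
>>> def multiply(a: float, b: float) -> float:
...     """Returns the product of a and b."""
...     return a * b
>>> chat = model.start_chat(enable_automatic_function_calling=True)
>>> response = chat.send_message("What is 234551 times 325552?", tools=[multiply])
>>> print(response.text)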
send_message_async
send_message_async(
    content,
    *,
    generation_config=None,
    safety_settings=None,
    stream=False,
    tools=None,
    tool_config=None,
    request_options=None
)
The async version of ChatSession.send_message.
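A minimal async sketch (assuming the surrounding code drives it with asyncio.run):
>>> import asyncio
>>> async def main():
...     chat = model.start_chat()
...     response = await chat.send_message_async("Hello")
...     print(response.text)
>>> asyncio.run(main())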