Fix languageCode parameter in action_code_url (#8912)
* Fix languageCode parameter in action_code_url
* Add changeset
Vaihi add langmodel types. (#8927)
* Adding LanguageModel types. These are based on https://github.com/webmachinelearning/prompt-api?tab=readme-ov-file#full-api-surface-in-web-idl (see the sketch after these notes)
* Adding LanguageModel types.
* Remove bunch of exports
* yarn formatted
* after lint
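The types in #8927 track the Prompt API surface linked above. A minimal TypeScript sketch of what such declarations could look like, assuming a session-based API with streaming support; names and shapes here are illustrative, not the SDK's exact exports:

```typescript
// Illustrative declarations loosely mirroring the Prompt API web IDL;
// not the exact types merged in #8927.
interface LanguageModelCreateOptions {
  temperature?: number;
  topK?: number;
  systemPrompt?: string;
}

interface LanguageModel {
  // Single-shot prompt that resolves with the full response text.
  prompt(input: string): Promise<string>;
  // Streaming variant that yields chunks of the response.
  promptStreaming(input: string): ReadableStream<string>;
  // Releases the underlying on-device session.
  destroy(): void;
}

// Factory exposed by the browser for checking availability and creating sessions.
interface LanguageModelFactory {
  availability(options?: LanguageModelCreateOptions): Promise<string>;
  create(options?: LanguageModelCreateOptions): Promise<LanguageModel>;
}
```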
Define HybridParams (#8935)
Co-authored-by: Erik Eldridge <[email protected]>
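A hedged sketch of the shape that #8935 suggests: an inference mode paired with separate parameter bags for the on-device and in-cloud paths. Field names here are assumptions, not the merged definition.

```typescript
// Assumed shape, for illustration only; see #8935 for the real definition.
type InferenceMode = string; // narrowed to specific string values in #8941 (sketched further below)

interface OnDeviceParams {
  temperature?: number;
  topK?: number;
}

interface InCloudParams {
  // Optional model override; the SDK documents a default in-cloud model (see the docs diff below).
  model?: string;
}

export interface HybridParams {
  // Which inference path to prefer.
  mode: InferenceMode;
  // Forwarded to the on-device LanguageModel when it is used.
  onDeviceParams?: OnDeviceParams;
  // Used when the request is served in the cloud.
  inCloudParams?: InCloudParams;
}
```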
Adding smoke test for new hybrid params (#8937)
* Adding smoke test for new hybrid params (a hedged sketch follows below)
* Use the existing name of the model params input
---------
Co-authored-by: Erik Eldridge <[email protected]>
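As a rough illustration of what such a smoke test exercises, here is a hedged sketch that assumes getGenerativeModel accepts the hybrid params object through the existing model-params input; the mode value and the cast are assumptions for this sketch only.

```typescript
// Hypothetical smoke test; the real test lives in the SDK's own test suite.
import { initializeApp } from 'firebase/app';
import { getVertexAI, getGenerativeModel } from 'firebase/vertexai';

async function smokeTestHybridParams(): Promise<void> {
  const app = initializeApp({ /* project config */ });
  const vertexAI = getVertexAI(app);

  // Pass hybrid params through the same argument that normally takes ModelParams.
  const model = getGenerativeModel(vertexAI, {
    mode: 'prefer_on_device',
  } as any); // cast only because this sketch does not declare the HybridParams overload

  const result = await model.generateContent('Hello from the hybrid path');
  console.log(result.response.text());
}
```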
Moving to in-cloud naming (#8938)
Co-authored-by: Erik Eldridge <[email protected]>
Moving to string type for the inference mode (#8941)
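#8941 swaps the earlier enum for a plain string type. A hedged sketch of the pattern, with the literal values assumed rather than copied from the PR:

```typescript
// String-literal union instead of a TypeScript enum: values stay plain strings
// in the public API and in serialized configs. The literals are assumptions.
export type InferenceMode = 'prefer_on_device' | 'only_on_device' | 'only_in_cloud';

// Optional companion constants so callers still get autocomplete.
export const InferenceModes = {
  PREFER_ON_DEVICE: 'prefer_on_device',
  ONLY_ON_DEVICE: 'only_on_device',
  ONLY_IN_CLOUD: 'only_in_cloud',
} as const;
```

A string type keeps the TypeScript surface and the serialized form identical, which is one common reason to prefer it over an enum in a public SDK.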
Define ChromeAdapter class (#8942)
Co-authored-by: Erik Eldridge <[email protected]>
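A hedged sketch of the role such an adapter could play: decide whether a request can be served by Chrome's built-in model and run it there, leaving the caller to fall back to the in-cloud path otherwise. Method names and fields are assumptions; the types come from the sketches above.

```typescript
// Illustrative adapter only; not the class merged in #8942/#8943.
// LanguageModelFactory, OnDeviceParams and InferenceMode refer to the sketches above.
class ChromeAdapter {
  constructor(
    private languageModelProvider: LanguageModelFactory | undefined,
    private mode: InferenceMode,
    private onDeviceParams: OnDeviceParams = {}
  ) {}

  // True when the mode allows on-device inference and the browser reports
  // the model as available for these params.
  async isAvailable(): Promise<boolean> {
    if (this.mode === 'only_in_cloud' || !this.languageModelProvider) {
      return false;
    }
    const availability = await this.languageModelProvider.availability(this.onDeviceParams);
    return availability === 'available';
  }

  // Run a plain text prompt against the on-device model.
  async generateContent(prompt: string): Promise<string> {
    const session = await this.languageModelProvider!.create(this.onDeviceParams);
    try {
      return await session.prompt(prompt);
    } finally {
      session.destroy();
    }
  }
}
```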
VinF Hybrid Inference: Implement ChromeAdapter (rebased) (#8943)
Adding count token impl (#8950)
VinF Hybrid Inference #4: ChromeAdapter in stream methods (rebased) (#8949)
Define values for Availability enum (#8951)
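The availability states come from the Prompt API. A hedged sketch of pinning them down as string constants; the spellings are taken from the Prompt API explainer, not from #8951:

```typescript
// Assumed values mirroring the Prompt API availability states.
export const Availability = {
  UNAVAILABLE: 'unavailable',   // this device or browser cannot run the model
  DOWNLOADABLE: 'downloadable', // supported, but the model must be downloaded first
  DOWNLOADING: 'downloading',   // a download is currently in progress
  AVAILABLE: 'available',       // ready for on-device inference
} as const;

export type Availability = (typeof Availability)[keyof typeof Availability];
```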
VinF Hybrid Inference: narrow Chrome input type (#8953)
Add image inference support (#8954)
* Adding image based input for inference
* adding image as input to create language model object
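A hedged sketch of that flow, loosely following the Prompt API's multimodal explainer: declare the image input when creating the language model session, then pass the image alongside text in the prompt. Every field name here is an assumption.

```typescript
// Shapes loosely follow the Prompt API multimodal explainer; treat each
// field name as an assumption rather than the SDK's implementation.
// `languageModel` stands in for the browser's LanguageModel factory.
async function describeImage(languageModel: any, image: ImageBitmap): Promise<string> {
  // Declare up front that this session will receive image input.
  const session = await languageModel.create({
    expectedInputs: [{ type: 'image' }],
  });
  // Mix text and image content in a single prompt message.
  return session.prompt([
    {
      role: 'user',
      content: [
        { type: 'text', value: 'Describe this image in one sentence.' },
        { type: 'image', value: image },
      ],
    },
  ]);
}
```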
disable count tokens api for on-device inference (#8962)
VinF Hybrid Inference: throw if only_on_device and model is unavailable (#8965)
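A hedged sketch of that guard: when the mode is only_on_device and the on-device model is not available, fail fast rather than silently falling back to the cloud. The error type and message are assumptions.

```typescript
// Illustrative guard; the SDK's actual error class and message will differ.
// ChromeAdapter and InferenceMode refer to the sketches above.
async function assertOnDeviceAvailable(adapter: ChromeAdapter, mode: InferenceMode): Promise<void> {
  if (mode === 'only_on_device' && !(await adapter.isAvailable())) {
    throw new Error(
      'Inference mode is only_on_device, but the on-device model is unavailable.'
    );
  }
}
```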
docs-devsite/vertexai.generativemodel.md (+12 lines)
@@ -29,6 +29,7 @@ export declare class GenerativeModel extends AIModel
| Property | Modifiers | Type | Description |
| --- | --- | --- | --- |
+| [DEFAULT\_HYBRID\_IN\_CLOUD\_MODEL](./vertexai.generativemodel.md#generativemodeldefault_hybrid_in_cloud_model) | <code>static</code> | string | Defines the name of the default in-cloud model to use for hybrid inference. |
|[ImagenGCSImage](./vertexai.imagengcsimage.md#imagengcsimage_interface)| An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.<!---->This feature is not available yet. |
|[ImagenGenerationConfig](./vertexai.imagengenerationconfig.md#imagengenerationconfig_interface)| <b><i>(Public Preview)</i></b> Configuration options for generating images with Imagen.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images-imagen) for more details. |
|[ImagenGenerationResponse](./vertexai.imagengenerationresponse.md#imagengenerationresponse_interface)| <b><i>(Public Preview)</i></b> The response from a request to generate images with Imagen. |