            ChatMessage(role="system", content="You're an honest assistant."),
            ChatMessage(role="user", content="There's a llama in my garden, what should I do?"),
        ],
        [
            ChatMessage(role="user", content="What's the population of the world?"),
        ],
    ],
    max_tokens=2048,
)

for result in results:
    print(f"RESULT: \n{result}\n\n")
```
`llmlite` also supports other parameters such as `temperature`, `max_length`, `do_sample`, `top_k`, and `top_p` to help control the length, randomness, and diversity of the generated text.
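As a rough intuition for what these sampling parameters do, here is a small, self-contained sketch (plain Python, independent of `llmlite`; the `filter_candidates` helper is hypothetical and only mimics how temperature scaling and top-k/top-p filtering narrow the candidate token set before sampling):

```python
import math

def filter_candidates(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Toy top-k / top-p (nucleus) filtering over token logits.

    Returns the surviving (token, probability) pairs, renormalized.
    This illustrates the standard technique; it is not llmlite's code.
    """
    # Temperature scaling: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = sorted(
        ((tok, math.exp(v) / total) for tok, v in scaled.items()),
        key=lambda kv: kv[1],
        reverse=True,
    )

    # top_k: keep only the k most likely tokens (0 disables the filter).
    if top_k > 0:
        probs = probs[:top_k]

    # top_p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cumulative = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize so the surviving probabilities sum to 1 before sampling.
    norm = sum(p for _, p in kept)
    return [(tok, p / norm) for tok, p in kept]

candidates = filter_candidates(
    {"the": 2.0, "a": 1.5, "llama": 1.0, "banana": -1.0},
    temperature=0.8, top_k=3, top_p=0.9,
)
print(candidates)  # low-probability "banana" is filtered out
```

`do_sample` then decides whether the model samples from this filtered distribution or greedily picks the top token, and `max_length` bounds how many tokens are generated in total.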
See **[examples](./examples/)** for reference.
You can use `llmlite` to help you generate full prompts, for instance:
```python
from llmlite import ChatLLM

messages = [
    ChatMessage(role="system", content="You're an honest assistant."),
    ChatMessage(role="user", content="There's a llama in my garden, what should I do?"),