Commit 204406e

Draft
1 parent f9e0a15 commit 204406e

109 files changed: 4224 additions & 1244 deletions


docs/examples/chain_of_density.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -67,13 +67,13 @@ model = outlines.from_transformers(
     transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
 )
 prompt = chain_of_density(article=article)
-result = model(prompt, Summaries, max_new_tokens=2000)
+result = model(prompt, Summaries, max_new_tokens=2000).content
 ```

 We can now check the results:

 ```python
-print(result)
+print(result.content)
 # {'summaries': [
 #     {
 #         'missing_entities': 'English mathematician, cryptanalyst, philosopher',
````
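
The same two-step pattern recurs throughout this commit: a generation call now returns a response object rather than a raw string, and the generated text lives in its `.content` attribute. A minimal before/after sketch, assuming only what these hunks show (`model`, `prompt`, and the `Summaries` schema come from the surrounding example):

```python
# Before this commit, the call returned the generated text directly:
# result = model(prompt, Summaries, max_new_tokens=2000)

# After it, the call returns a response object whose .content attribute
# holds the raw generated text (here, a JSON string for Summaries).
response = model(prompt, Summaries, max_new_tokens=2000)
result = response.content
```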

docs/examples/chain_of_thought.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -111,7 +111,7 @@ We obtain a series of intermediate reasoning steps as well as the conclusion:
 ```python
 import json

-json_response = json.loads(response)
+json_response = json.loads(response.content)

 print(json_response["reasoning"])
 print(json_response["conclusion"])
````

docs/examples/classification.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -52,7 +52,7 @@ prompts = [customer_support(request=request) for request in requests]
 We can now ask the model to classify the requests:

 ```python
-labels = generator(prompts)
+labels = generator(prompts).content
 print(labels)
 # ['URGENT', 'STANDARD']
 ```
@@ -79,7 +79,7 @@ We can then create a generator with the Pydantic model we just defined and call

 ```python
 generator = outlines.Generator(model, Classification)
-labels = generator(prompts)
+labels = generator(prompts).content
 print(labels)
 # ['{"label":"URGENT"}', '{ "label": "STANDARD" }']
 ```
````

docs/examples/dating_profiles.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -165,7 +165,7 @@ it's a good excuse for a date. I watch the latest series because I'm paying,
 with my hard-earned money, for every streaming service."""

 prompt = dating_profile_prompt(description=new_description, examples=samples)
-profile = model(prompt, DatingProfile)
+profile = model(prompt, DatingProfile).content
 parsed_profile = DatingProfile.model_validate_json(json.loads(profile))
 ```
````
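
Since `.content` here is the raw JSON text, it can also go straight to Pydantic: `model_validate_json` expects a string, so the extra `json.loads` round-trip is avoidable. A minimal sketch, assuming `profile` holds the JSON string produced by the new line above:

```python
# profile is the raw JSON string returned by the model.
parsed_profile = DatingProfile.model_validate_json(profile)
```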

docs/examples/deploy-using-bentoml.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -154,7 +154,7 @@ We then need to define an HTTP endpoint using `@bentoml.api` to decorate the met
         from outlines.types import JsonSchema

         generator = outlines.Generator(self.model, JsonSchema(json_schema))
-        character = generator(prompt)
+        character = generator(prompt).content

         return json.loads(character)
 ```
@@ -200,7 +200,7 @@ with bentoml.SyncHTTPClient("http://localhost:3000") as client:
     response = client.generate(
         prompt="Give me a character description"
     )
-    print(response)
+    print(response.content)
 ```

 </details>
````

docs/examples/deploy-using-cerebrium.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -110,7 +110,7 @@ def generate(

     character = generator(
         f"<s>[INST]Give me a character description. Describe {prompt}.[/INST]"
-    )
+    ).content

     return character
 ```
````

docs/examples/deploy-using-modal.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -161,7 +161,7 @@ def generate(
     # by models, so make sure to check the model's documentation.
     character = generator(
         f"<s>[INST]Give me a character description. Describe {prompt}.[/INST]"
-    )
+    ).content

     # Print out the generated character.
     print(character)
````

docs/examples/earnings-reports.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -246,7 +246,7 @@ Provide the prompt to the model and run it:
 csv_data = csv_extractor(
     extract_financial_data_prompt(columns_to_extract, income_statement),
     max_new_tokens=1024,
-)
+).content

 print(csv_data)
 ```
````
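
As this hunk shows, sampling keyword arguments such as `max_new_tokens` are untouched; only where the generated text is read changes. If the response object itself is needed, the chained call can be split, as in this sketch (same names as the surrounding example):

```python
# Split form: keep the response object, then read its text.
response = csv_extractor(
    extract_financial_data_prompt(columns_to_extract, income_statement),
    max_new_tokens=1024,
)
csv_data = response.content  # the generated CSV as a plain string
```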

docs/examples/extract_event_details.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -45,7 +45,7 @@ class Event(BaseModel):
 prompt = prompt_template(now=now, message=message)

 # Extract the event information
-event = generator(prompt)
+event = generator(prompt).content  # type: ignore

 # Print the current date and time
 print(f"Today: {now}")
````

docs/examples/extraction.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -79,7 +79,7 @@ prompts = [take_order(order=order) for order in orders]
 generator = outlines.Generator(model, Order)

 results = generator(prompts)
-print(results)
+print(results.content)
 # ['{"pizza": "Pepperoni", "number": 2}',
 #  '{"pizza": "Margherita", "number": 12}']
 ```
````
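
The printed output shows one JSON string per prompt, which can then be validated into `Order` instances; a short sketch, assuming `results.content` is that list as the output above suggests:

```python
# Validate each generated JSON string against the Order schema.
parsed = [Order.model_validate_json(r) for r in results.content]
print(parsed[0].pizza, parsed[0].number)  # Pepperoni 2
```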
