## Purpose
<!-- Describe the intention of the changes being proposed. What problem
does it solve or functionality does it add? -->
Update the README and link to .NET 8.
## Does this introduce a breaking change?
<!-- Mark one with an "x". -->
```
[ ] Yes
[x] No
```
## Pull Request Type
What kind of change does this Pull Request introduce?
<!-- Please check the one that applies to this PR using "x". -->
```
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[x] Documentation content changes
[ ] Other... Please describe:
```
## How to Test
Verify that the link in the README goes to the .NET 8 page.
---------
Co-authored-by: David Pine <[email protected]>
-[Powershell 7+ (pwsh)](https://github.com/powershell/powershell) - For Windows users only.
@@ -286,4 +286,4 @@ to production. Here are some things to consider:
 **_Question_**: Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?

-**_Answer_**: Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, it allows us to easily find potential chunks of text that we can inject into OpenAI. The method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next. This allows us to reduce the chance of losing the context of the text.
+**_Answer_**: Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, it allows us to easily find potential chunks of text that we can inject into OpenAI. The method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next. This allows us to reduce the chance of losing the context of the text.
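The sliding-window chunking described in the answer above can be sketched roughly as follows. This is a minimal, hypothetical illustration in Python (not the repository's actual implementation, which works on sentence boundaries within PDF pages); the function name and window sizes are assumptions chosen for clarity:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks using a sliding window.

    Each chunk shares its first `overlap` characters with the end of
    the previous chunk, reducing the chance that context is lost at a
    chunk boundary before the chunks are indexed for search.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Slide the window back by `overlap` so the chunks share text.
        start = end - overlap
    return chunks
```

Because adjacent chunks overlap, a sentence that straddles one boundary is still fully contained in at least one chunk, so it can be retrieved and injected into the OpenAI prompt without truncation.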