
Commit cbe158e

Update readme to net 8 (Azure-Samples#208)
## Purpose

Update the readme and link to .NET 8.

## Does this introduce a breaking change?

```
[ ] Yes
[x] No
```

## Pull Request Type

What kind of change does this Pull Request introduce?

```
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[x] Documentation content changes
[ ] Other... Please describe:
```

## How to Test

Check that the link in the readme goes to the .NET 8 page.

---------

Co-authored-by: David Pine <[email protected]>
1 parent 3435fc4 commit cbe158e

File tree

1 file changed: +2 −2 lines

README.md (+2 −2)
```diff
@@ -97,7 +97,7 @@ A related option is VS Code Remote Containers, which will open the project in yo
 Install the following prerequisites:
 
 - [Azure Developer CLI](https://aka.ms/azure-dev/install)
-- [.NET 7](https://dotnet.microsoft.com/download)
+- [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0)
 - [Git](https://git-scm.com/downloads)
 - [Powershell 7+ (pwsh)](https://github.com/powershell/powershell) - For Windows users only.
 
@@ -286,4 +286,4 @@ to production. Here are some things to consider:
 
 **_Question_**: Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?
 
-**_Answer_**: Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, it allows us to easily find potential chunks of text that we can inject into OpenAI. The method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next. This allows us to reduce the chance of losing the context of the text.
+**_Answer_**: Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, it allows us to easily find potential chunks of text that we can inject into OpenAI. The method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next. This allows us to reduce the chance of losing the context of the text.
```
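The sliding-window chunking described in the README's answer can be sketched as follows. This is a minimal illustration, not the sample repository's actual chunking code; the function name `chunk_text` and the `chunk_size`/`overlap` parameters are hypothetical.

```python
def chunk_text(sentences, chunk_size=4, overlap=1):
    """Group sentences into overlapping chunks.

    The last `overlap` sentence(s) of one chunk also begin the next
    chunk, so context spanning a chunk boundary is preserved on both
    sides, as described in the README answer above.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(sentences), step):
        window = sentences[start:start + chunk_size]
        chunks.append(" ".join(window))
        # Stop once a window reaches the end of the input.
        if start + chunk_size >= len(sentences):
            break
    return chunks
```

For example, with `chunk_size=3` and `overlap=1`, the sentence that closes one chunk reopens the next: `chunk_text(["a.", "b.", "c.", "d.", "e."], chunk_size=3, overlap=1)` yields `["a. b. c.", "c. d. e."]`.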
