Currently ChatGPT adds a lot of filler to its output: at the end, things like "to be continued..." or "Chapter continues.", and at the start, the chapter name repeated or "The Chapter Continues", even though it has been told to continue writing the chapter.
It needs to check its output and strip these parts, so that only the portion of the chapter it actually needs to write gets through. This is probably easy to fix, but having the model check its own output doubles or triples the cost per book, unless the GPT model is being run locally. That cost is why I may not write this part into the program, or why I might make the feature optional via a checkbox. It would be nice not to have to manually edit all of this out of each chapter.
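One way to avoid the extra API cost entirely would be to strip the filler locally with plain string matching instead of asking the model to re-check itself. A minimal sketch of that idea is below; the phrase lists are assumptions based on the examples above and would need to grow as new filler patterns show up:

```python
import re

# Hypothetical filler phrases seen at the start of continuations;
# extend these lists as new patterns appear in real output.
LEADING_PATTERNS = [
    r"^the chapter continues[.:]?\s*",
    r"^chapter\s+\w+[.:]?\s*\n",  # a repeated chapter-heading line
]
TRAILING_PATTERNS = [
    r"\s*to be continued\.{0,3}$",
    r"\s*chapter continues\.?$",
]

def strip_filler(text: str) -> str:
    """Remove boilerplate the model adds before and after the real prose."""
    cleaned = text.strip()
    for pat in LEADING_PATTERNS:
        cleaned = re.sub(pat, "", cleaned, flags=re.IGNORECASE)
    for pat in TRAILING_PATTERNS:
        cleaned = re.sub(pat, "", cleaned, flags=re.IGNORECASE)
    return cleaned.strip()

print(strip_filler("The Chapter Continues\nHe walked on.\nTo be continued..."))
# He walked on.
```

A pattern list like this is brittle compared to having the model clean itself up, but it runs for free on every chapter, so it could be the default behind the checkbox while the model-based cleanup stays optional.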
Hopefully OpenAI releases its next model soon; it might not have this problem, and that would save money on the number of tokens required.