diff --git a/demystifying_large_language_models/demystifying_large_language_models.md b/demystifying_large_language_models/demystifying_large_language_models.md
index 6ac70d01..094b8910 100644
--- a/demystifying_large_language_models/demystifying_large_language_models.md
+++ b/demystifying_large_language_models/demystifying_large_language_models.md
@@ -2,7 +2,7 @@
module_id: demystifying_large_language_models
author: Joy Payton
email: paytonk@chop.edu
-version: 1.0.6
-current_version_description: Initial version
+version: 1.0.7
+current_version_description: Fix formatting of the important note callout
module_type: standard
docs_version: 2.0.0
@@ -281,9 +281,10 @@ Many of us already use digital assistants like Siri or Alexa. These digital ass
LLMs require significant energy (with its associated carbon burden) as well as other resources, such as water for data center cooling needs and materials for computer chips. The extraction and use of these resources can harm the environment and directly and indirectly harm people. People are also potentially harmed by other aspects of LLMs, including employment related risks as jobs are automated or made less creative by the use of LLMs. Artists and authors, for example, are already contending with threats to their livelihood and the use of their works as training data. Finally, there is a risk of widening the digital divide and exacerbating differences in access to the benefits of technology.
 
 Important note
- comic by xkcd, [CC BY-NC 2.5](https://xkcd.com/license.html).")This risk framework reflects a point in time, and you may think of risk types or examples that aren't reflected here. Suffice it to say, LLMs, like any laboratory tool, must be used with discretion and an awareness of the risks, costs, tradeoffs, and utility they provide.
+
+This risk framework reflects a point in time, and you may think of risk types or examples that aren't reflected here. Suffice it to say, LLMs, like any laboratory tool, must be used with discretion and an awareness of the risks, costs, tradeoffs, and utility they provide.