diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 05b559d21..fe89f50b6 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -1,7 +1,8 @@
# Description
-_Please include a summary of the change and which issue is fixed. Please also include relevant
-motivation and context. List any dependencies that are required for this change._
+_Please include a summary of the change and which issue is fixed. Please also
+include relevant motivation and context. List any dependencies that are required
+for this change._
Fixes # (issue)
@@ -11,14 +12,15 @@ _Please delete options that are not relevant._
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
-- [ ] Breaking change (fix or feature that would cause existing functionality to not work as
- expected)
+- [ ] Breaking change (fix or feature that would cause existing functionality to
+ not work as expected)
- [ ] Documentation (update or new)
## How Has This Been Tested?
-_Please describe the tests that you ran to verify your changes. Provide instructions so we can
-reproduce. Please also list any relevant details for your test configuration_
+_Please describe the tests that you ran to verify your changes. Provide
+instructions so we can reproduce. Please also list any relevant details for your
+test configuration_
## Testing Checklist
diff --git a/.prettierrc b/.prettierrc
index ac50a21a1..5b5bd9933 100644
--- a/.prettierrc
+++ b/.prettierrc
@@ -1,4 +1,3 @@
{
- "printWidth": 100,
"proseWrap": "always"
}
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5ab3aea32..7fa05f9be 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,16 +1,17 @@
# Contributing
-You can help to improve the Thoth Tech documentation by sending pull requests to this repository.
-Thank you for your interest and help!
+You can help to improve the Thoth Tech documentation by sending pull requests to
+this repository. Thank you for your interest and help!
-Feel free to make a proposal, we can discuss anything and if we don't agree we'll feel free not to
-merge it and we'll thank you for caring about it. We want to create a welcoming environment for
-everyone who is interested in contributing.
+Feel free to make a proposal; we can discuss anything, and if we don't agree,
+we won't merge it, but we'll thank you for caring about it. We want to create a
+welcoming environment for everyone who is interested in contributing.
## Documentation testing
-We treat documentation like code. Therefore, we use processes similar to those used for code to
-maintain standards and quality of documentation.
+We treat documentation like code. Therefore, we use processes similar to those
+used for code to maintain standards and quality of documentation.
We have tests:
@@ -18,23 +19,24 @@ We have tests:
### Run tests locally
-You can run these tests on your local computer. This has the advantage of speeding up the feedback
-loop. You can know of any problems with the changes in your branch without waiting for a CI pipeline
-to run.
+You can run these tests on your local computer. This has the advantage of
+speeding up the feedback loop. You can know of any problems with the changes in
+your branch without waiting for a CI pipeline to run.
To run the tests locally, it's important to:
- [Install the tools](#installation)
-- Run [linters](#lint-checks) the same way they are run in CI pipelines. It's important to use same
- configuration we use in CI pipelines, which can be different than the default configuration of the
- tool.
+- Run [linters](#lint-checks) the same way they are run in CI pipelines. It's
+  important to use the same configuration we use in CI pipelines, which can
+  differ from the default configuration of the tool.
### Local linters
To help adhere to the
[documentation style guidelines](https://github.com/thoth-tech/handbook/blob/main/docs/processes/documentation/writing-style-guide.md),
-and improve the content added to documentation, [install documentation linters](#install-linters)
-and [integrate them with your code editor](#configure-editors).
+and improve the content added to documentation,
+[install documentation linters](#install-linters) and
+[integrate them with your code editor](#configure-editors).
At Thoth Tech, we mostly use:
@@ -48,13 +50,14 @@ At Thoth Tech, we mostly use:
#### Vale
-[Vale](https://docs.errata.ai/vale/about/) is a grammar, style, and word usage linter for the
-English language. Vale's configuration is stored in the
-[`.vale.ini`](https://github.com/thoth-tech/documentation/blob/main/.vale.ini) file located in the
-root directory.
+[Vale](https://docs.errata.ai/vale/about/) is a grammar, style, and word usage
+linter for the English language. Vale's configuration is stored in the
+[`.vale.ini`](https://github.com/thoth-tech/documentation/blob/main/.vale.ini)
+file located in the root directory.
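
As an illustration only (these are not the repository's actual settings), a
`.vale.ini` that wires Vale to a custom style directory has this shape:

```ini
# Illustrative sketch -- see the repository's real .vale.ini for actual values
StylesPath = .vale
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = thothtech
```

Here `StylesPath` points at the directory holding custom styles, and
`BasedOnStyles` applies a named style set to matching files.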
-Vale supports creating [custom tests](https://docs.errata.ai/vale/styles) that extend any of several
-types of checks, which we store in the `.vale/thothtech/` directory in the documentation directory.
+Vale supports creating [custom tests](https://docs.errata.ai/vale/styles) that
+extend any of several types of checks, which we store in the `.vale/thothtech/`
+directory in the documentation directory.
##### Vale result types
@@ -72,13 +75,14 @@ we have implemented
[the Flesch-Kincaid grade level test](https://readable.com/readability/flesch-reading-ease-flesch-kincaid-grade-level/)
to determine the readability of our documentation.
-As a general guideline, the lower the score, the more readable the documentation. For example, a
-page that scores `12` before a set of changes, and `9` after, indicates an iterative improvement to
-readability. The score is not an exact science, but is meant to help indicate the general complexity
-level of the page.
+As a general guideline, the lower the score, the more readable the
+documentation. For example, a page that scores `12` before a set of changes, and
+`9` after, indicates an iterative improvement to readability. The score is not
+an exact science, but is meant to help indicate the general complexity level of
+the page.
-The readability score is calculated based on the number of words per sentence, and the number of
-syllables per word. For more information, see
+The readability score is calculated based on the number of words per sentence,
+and the number of syllables per word. For more information, see
[the Vale documentation](https://docs.errata.ai/vale/styles#metric).
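
For reference, the standard Flesch-Kincaid grade-level formula combines these
two ratios as follows:

```plaintext
grade = 0.39 * (total words / total sentences)
      + 11.8 * (total syllables / total words)
      - 15.59
```

Shorter sentences and shorter words both push the grade down, which is why the
score tracks general complexity rather than correctness.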
#### Installation
@@ -87,20 +91,21 @@ syllables per word. For more information, see
##### macOS
-1. Install [Homebrew](https://brew.sh/), which is a package manager for macOS that allows you to
- easily install programs and tools through the Terminal. Visit their website for installation
- instructions.
+1. Install [Homebrew](https://brew.sh/), which is a package manager for macOS
+ that allows you to easily install programs and tools through the Terminal.
+ Visit their website for installation instructions.
1. Follow the
- [official instructions to install nvm](https://github.com/nvm-sh/nvm#installing-and-updating), a
- Node version manager. Then, run the following to install and use the repository's Node version:
+ [official instructions to install nvm](https://github.com/nvm-sh/nvm#installing-and-updating),
+ a Node version manager. Then, run the following to install and use the
+ repository's Node version:
```shell
nvm install
nvm use
```
- The required Node version should be automatically detected from the `.nvmrc` file. This can be
- confirmed by running `nvm which`.
+ The required Node version should be automatically detected from the `.nvmrc`
+ file. This can be confirmed by running `nvm which`.
1. Install all dependencies
@@ -110,9 +115,10 @@ syllables per word. For more information, see
##### Windows (using WSL2)
-1. Set up Windows Subsystem for Linux (WSL) and the Linux distribution. WSL allows Linux
- distributions to run on the Windows OS. Visit this
- [website](https://docs.microsoft.com/en-us/windows/wsl/install) for more information.
+1. Set up Windows Subsystem for Linux (WSL) and the Linux distribution. WSL
+ allows Linux distributions to run on the Windows OS. Visit this
+ [website](https://docs.microsoft.com/en-us/windows/wsl/install) for more
+ information.
```powershell
wsl --install
@@ -129,16 +135,17 @@ syllables per word. For more information, see
```
1. Follow the
- [official instructions to install nvm](https://github.com/nvm-sh/nvm#installing-and-updating), a
- Node version manager. Then, run the following to install and use the repository's Node version:
+ [official instructions to install nvm](https://github.com/nvm-sh/nvm#installing-and-updating),
+ a Node version manager. Then, run the following to install and use the
+ repository's Node version:
```shell
nvm install
nvm use
```
- The required Node version should be automatically detected from the `.nvmrc` file. This can be
- confirmed by running `nvm which`.
+ The required Node version should be automatically detected from the `.nvmrc`
+ file. This can be confirmed by running `nvm which`.
1. Install all dependencies
@@ -163,10 +170,11 @@ These tools can be [integrated with your code editor](#configure-editors).
### Configure editors
-Using linters in your editor is more convenient than having to run the commands from the command
-line.
+Using linters in your editor is more convenient than having to run the commands
+from the command line.
-To configure `prettier` in your editor, install one of the following as appropriate:
+To configure `prettier` in your editor, install one of the following as
+appropriate:
- Visual Studio Code
[`esbenp.prettier-vscode` extension](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode).
@@ -178,7 +186,8 @@ To configure Vale in your editor, install one of the following as appropriate:
### Lint checks
-The following commands can be run to format all Markdown files across the entire repository:
+The following commands can be run to format all Markdown files across the entire
+repository:
```shell
# Format markdown style (fixing markdown style issues)
@@ -196,5 +205,6 @@ npm run prose:check
## Contributing guidelines
-Contributing formatting guidelines, including git workflow and commit formatting requirements can be
-found in our [git contribution guide](docs/processes/quality-assurance/git-contribution-guide.md).
+Contributing formatting guidelines, including git workflow and commit
+formatting requirements, can be found in our
+[git contribution guide](docs/processes/quality-assurance/git-contribution-guide.md).
diff --git a/README.md b/README.md
index eb2bef86e..ec5acaffa 100644
--- a/README.md
+++ b/README.md
@@ -11,28 +11,32 @@
[](https://github.com/thoth-tech/ThothTech-Documentation-Website/network/members)
[](https://github.com/thoth-tech/ThothTech-Documentation-Website/stargazers)
-Welcome to the **ThothTech Documentation Website** repository! This project serves as the central
-hub for all Thoth Tech resources, designed to provide a structured and accessible platform for
-documentation, product information, and team resources. This contains out long term documentation,
-such as the documentation for onboarding information, general information and company deliverables.
-
-Short term documentation such as spike reports and sprint reports are stored in the
-[Documentation](https://github.com/thoth-tech/documentation) repository.
-
-Built with Astro and Starlight, this website offers an organized space where users can explore Thoth
-Tech's mission, values, and goals, along with in-depth information on each of the company's products
-and services. Each product has its own dedicated section, featuring a brief overview, its purpose,
-and comprehensive documentation to support both users and development teams.
-
-The site also includes team documentation for each semester, highlighting the efforts of the
-individuals contributing to Thoth Tech's ongoing projects. Whether you're a developer, a team
-member, or a user, this website provides all the information needed to understand and contribute to
-Thoth Tech's vision and initiatives.
+Welcome to the **ThothTech Documentation Website** repository! This project
+serves as the central hub for all Thoth Tech resources, designed to provide a
+structured and accessible platform for documentation, product information, and
+team resources. This contains our long-term documentation, such as the
+documentation for onboarding, general information, and company deliverables.
+
+Short-term documentation, such as spike reports and sprint reports, is stored
+in the [Documentation](https://github.com/thoth-tech/documentation) repository.
+
+Built with Astro and Starlight, this website offers an organized space where
+users can explore Thoth Tech's mission, values, and goals, along with in-depth
+information on each of the company's products and services. Each product has its
+own dedicated section, featuring a brief overview, its purpose, and
+comprehensive documentation to support both users and development teams.
+
+The site also includes team documentation for each semester, highlighting the
+efforts of the individuals contributing to Thoth Tech's ongoing projects.
+Whether you're a developer, a team member, or a user, this website provides all
+the information needed to understand and contribute to Thoth Tech's vision and
+initiatives.
## Format Checks to Run
-To maintain code quality, please ensure you run the following commands before submitting a pull
-request:
+To maintain code quality, please ensure you run the following commands before
+submitting a pull request:
1. **Format the Files**:
@@ -54,8 +58,8 @@ request:
## Project Structure
-This website is built with Astro and uses the **Starlight Starter Kit** as a foundation. Below is a
-breakdown of the project structure:
+This website is built with Astro and uses the **Starlight Starter Kit** as a
+foundation. Below is a breakdown of the project structure:
```plaintext
.
@@ -71,9 +75,10 @@ breakdown of the project structure:
└── tsconfig.json # TypeScript configuration
```
-- **Documentation Content**: `.md` or `.mdx` files placed in `src/content/docs/` are exposed as
- routes based on their filenames.
-- **Static Assets**: Place images and other static files in `public/` for easy access.
+- **Documentation Content**: `.md` or `.mdx` files placed in `src/content/docs/`
+ are exposed as routes based on their filenames.
+- **Static Assets**: Place images and other static files in `public/` for easy
+ access.
## Commands
@@ -92,7 +97,8 @@ All commands should be run from the root of the project in the terminal:
## Getting Started
-To initialize the project, use the following Astro command to set up with the Starlight Starter Kit:
+To initialize the project, use the following Astro command to set up with the
+Starlight Starter Kit:
```shell
npm create astro@latest -- --template starlight
@@ -104,7 +110,9 @@ For more information, check out the [CONTRIBUTING.md](CONTRIBUTING.md) file.
For additional resources, check out the following:
-- **[Starlight Documentation](https://starlight.astro.build/)**: Learn more about the Starlight
- Starter Kit.
-- **[Astro Documentation](https://docs.astro.build)**: Understand the Astro framework.
-- **[Astro Discord Server](https://astro.build/chat)**: Connect with the Astro community.
+- **[Starlight Documentation](https://starlight.astro.build/)**: Learn more
+ about the Starlight Starter Kit.
+- **[Astro Documentation](https://docs.astro.build)**: Understand the Astro
+ framework.
+- **[Astro Discord Server](https://astro.build/chat)**: Connect with the Astro
+ community.
diff --git a/src/content/docs/Feedback/feedback-form.md b/src/content/docs/Feedback/feedback-form.md
index 6f3b45527..9ae26f282 100644
--- a/src/content/docs/Feedback/feedback-form.md
+++ b/src/content/docs/Feedback/feedback-form.md
@@ -5,24 +5,26 @@ description: A way to provide feedback
# We Value Your Feedback
-Thank you for taking the time to share your thoughts with us. Your feedback is crucial for helping
-us improve and ensuring we continue to provide the best possible experience.
+Thank you for taking the time to share your thoughts with us. Your feedback is
+crucial for helping us improve and ensuring we continue to provide the best
+possible experience.
## How to Provide Feedback
-You have several convenient options for submitting your feedback. Please choose the one that best
-suits your preference:
+You have several convenient options for submitting your feedback. Please choose
+the one that best suits your preference:
### Option 1: Fill Out Our Online Form
Open an issue in our
-[GitHub repository](https://github.com/thoth-tech/ThothTech-Documentation-Website/issues). It only
-takes a few minutes and helps us understand your perspective.
+[GitHub repository](https://github.com/thoth-tech/ThothTech-Documentation-Website/issues).
+It only takes a few minutes and helps us understand your perspective.
### Option 2: Contact Your Team Leader or Senior Students
-For specific feedback or concerns that require direct communication, please reach out to your team
-leader or senior students. This ensures your feedback is addressed promptly and personally.
+For specific feedback or concerns that require direct communication, please
+reach out to your team leader or senior students. This ensures your feedback is
+addressed promptly and personally.
---
@@ -30,12 +32,15 @@ leader or senior students. This ensures your feedback is addressed promptly and
Once you submit your feedback:
-- **Review:** Our team will review your feedback to understand your needs and concerns.
-- **Plan:** We will plan improvements based on the feedback received from you and other users.
-- **Implement:** We will strive to implement changes and improvements continuously to enhance your
- experience.
+- **Review:** Our team will review your feedback to understand your needs and
+ concerns.
+- **Plan:** We will plan improvements based on the feedback received from you
+ and other users.
+- **Implement:** We will strive to implement changes and improvements
+ continuously to enhance your experience.
-Your input is incredibly valuable to us, and we appreciate the effort you put into helping us
-improve. If you have any immediate concerns, please contact our company operations team.
+Your input is incredibly valuable to us, and we appreciate the effort you put
+into helping us improve. If you have any immediate concerns, please contact our
+company operations team.
Thank you for your contribution!
diff --git a/src/content/docs/Products/OnTrack/01-start-contributing.md b/src/content/docs/Products/OnTrack/01-start-contributing.md
index c8872dda8..68e974ea3 100644
--- a/src/content/docs/Products/OnTrack/01-start-contributing.md
+++ b/src/content/docs/Products/OnTrack/01-start-contributing.md
@@ -7,51 +7,59 @@ sidebar:
### Ontrack Libraries
-The OnTrack system consists of a [Ruby On Rails](https://rubyonrails.org/) backend using the
-[Grape API framework](https://github.com/ruby-grape/grape), and an
-[Angular 17](https://v17.angular.io/docs) and [TailWindCSS](https://tailwindcss.com/) frontend.
-Currently, the frontend is in the process of a migration from a [AngularJS](https://angularjs.org/)
-and [Bootstrap 3.4](https://getbootstrap.com/docs/3.4/) to this new structure.
+The OnTrack system consists of a [Ruby On Rails](https://rubyonrails.org/)
+backend using the [Grape API framework](https://github.com/ruby-grape/grape),
+and an [Angular 17](https://v17.angular.io/docs) and
+[TailWindCSS](https://tailwindcss.com/) frontend. Currently, the frontend is in
+the process of migrating from [AngularJS](https://angularjs.org/) and
+[Bootstrap 3.4](https://getbootstrap.com/docs/3.4/) to this new structure.
### Version Control
-At ThothTech, we use Git as our version control system, with our repositories being stored on
-GitHub. This allows for easily collaboration and code storage for such a large and complex system.
-The OnTrack system is stored within 3 repositories:
+At ThothTech, we use Git as our version control system, with our repositories
+stored on GitHub. This allows for easy collaboration and code storage for such
+a large and complex system. The OnTrack system is stored within three
+repositories:
-- [doubtfire-web](https://github.com/thoth-tech/doubtfire-web) contains the frontend
-- [doubtfire-api](https://github.com/thoth-tech/doubtfire-api) contains the backend
-- [doubtfire-deploy](https://github.com/thoth-tech/doubtfire-deploy) is used to manage deployments
- and releases
+- [doubtfire-web](https://github.com/thoth-tech/doubtfire-web) contains the
+ frontend
+- [doubtfire-api](https://github.com/thoth-tech/doubtfire-api) contains the
+ backend
+- [doubtfire-deploy](https://github.com/thoth-tech/doubtfire-deploy) is used to
+ manage deployments and releases
-While working on the OnTrack system, you will mainly be operating within the doubtfire-web and
-doubtfire-api repositories.
+While working on the OnTrack system, you will mainly be operating within the
+doubtfire-web and doubtfire-api repositories.
-These 3 linked repositories are owned by ThothTech and are the ones that you will fork to make your
-changes. When your changes are approved, they will be merged into these repositories and can later
-be merged upstream into the [doubtfire-lms](https://github.com/doubtfire-lms) series of
-repositories. The doubtfire-lms repositories contain all changes that have been accepted.
+These three linked repositories are owned by ThothTech and are the ones you
+will fork to make your changes. When your changes are approved, they will be
+merged into these repositories and can later be merged upstream into the
+[doubtfire-lms](https://github.com/doubtfire-lms) series of repositories. The
+doubtfire-lms repositories contain all changes that have been accepted.
### Development Container
-While working with the OnTrack system, we use a development container. This container includes all
-the previously listed repositories under one download, and enables you to easily run the OnTrack
-system. Details on how to set up and run the development container can be found in the
+While working with the OnTrack system, we use a development container. This
+container includes all the previously listed repositories in a single download,
+and enables you to easily run the OnTrack system. Details on how to set up and
+run the development container can be found in the
[Doubtfire contributing guide](https://github.com/thoth-tech/doubtfire-deploy/blob/development/CONTRIBUTING.md).
#### Running the Dev Container
-To run the dev container, first you must get the **Docker Engine** running and open the
-**doubtfire-deploy** folder in a dev container using VS Code's Command Palette (ctrl / cmd + shift +
-p) command `Dev Containers: Open Folder in Container...`
+To run the dev container, you must first get the **Docker Engine** running and
+open the **doubtfire-deploy** folder in a dev container using VS Code's Command
+Palette (ctrl / cmd + shift + p) command
+`Dev Containers: Open Folder in Container...`
-Make sure the branch is development and the git remotes are configured properly. It should look
-something like this for all 3 repos (doubtfire-deploy, doubtfire-web, doubtfire-api):
+Make sure the branch is development and the git remotes are configured properly.
+It should look something like this for all three repos (doubtfire-deploy,
+doubtfire-web, doubtfire-api):

-**Origin** should point to the **fork that you have created** and **upstream** should point to the
-**Thoth Tech** repo
+**Origin** should point to the **fork that you have created** and **upstream**
+should point to the **Thoth Tech** repo
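
As a sketch of the expected layout, the same remote setup can be recreated in a
throwaway repository (the repo name and `your-username` below are placeholders,
not real accounts):

```shell
# Illustrative only: "your-username" stands in for your GitHub account.
git init -q remotes-demo
git -C remotes-demo remote add origin https://github.com/your-username/doubtfire-web.git
git -C remotes-demo remote add upstream https://github.com/thoth-tech/doubtfire-web.git
git -C remotes-demo remote -v   # origin -> your fork, upstream -> Thoth Tech
```

In the real dev container you would verify this with `git remote -v` inside
each of the three repo folders rather than creating a new repo.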
#### Common errors during dev container setup
@@ -62,76 +70,84 @@ something like this for all 3 repos (doubtfire-deploy, doubtfire-web, doubtfire-
###### Solution
Dev container related files are only located in the **development** branch under
-**_.devcontainer_**. This means if you are on the wrong branch or your container doesn't recognise
-the git repository, it won't find the dev container files and your container won't be configured
-properly (your Configuring... terminal would show an error like shown below e.g. post_start.sh was
-not found.)
+**_.devcontainer_**. This means if you are on the wrong branch, or your
+container doesn't recognise the git repository, it won't find the dev container
+files and your container won't be configured properly (the Configuring...
+terminal would show an error like the one below, e.g. post_start.sh not found).

-On Windows, your Docker workspace won't recognise the git repository if you don't configure git to
-mark the workspace as a safe directory. If you do a `git fetch` like shown below, git will alert you
-of this and tell you which command to run to fix it. Copy paste that command and run it. After that,
-you'll see the branch name in parenthesis as shown below in red. Do this for all 3 repos to mark
-each directory as safe for git operations.
+On Windows, your Docker workspace won't recognise the git repository if you
+don't configure git to mark the workspace as a safe directory. If you do a
+`git fetch` like shown below, git will alert you of this and tell you which
+command to run to fix it. Copy and paste that command and run it. After that,
+you'll see the branch name in parentheses, as shown below in red. Do this for
+all three repos to mark each directory as safe for git operations.

-**Red** means there is a problem with your branch. Ideally the branch name should be in **green**.
-To fix this, run `git reset --hard upstream/development` for each repo. This is assuming you have
-fetched the latest changes for each repo using `git fetch` and you are on the development branch.
-Run `git status` to make sure everything's up to date and there are no pending changes.
+**Red** means there is a problem with your branch. Ideally, the branch name
+should be in **green**. To fix this, run `git reset --hard upstream/development`
+for each repo. This assumes you have fetched the latest changes for each repo
+using `git fetch` and are on the development branch. Run `git status` to make
+sure everything is up to date and there are no pending changes.
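
The fetch-and-reset flow can be sketched end to end with a throwaway pair of
local repos standing in for upstream and your clone (all names here are
illustrative):

```shell
# Build a stand-in "upstream" repo with a development branch
git init -q upstream-demo
git -C upstream-demo config user.email demo@example.com
git -C upstream-demo config user.name demo
git -C upstream-demo commit -q --allow-empty -m "initial"
git -C upstream-demo branch -m development

# Clone it, name the remote "upstream" as in the real setup, then reset
git clone -q upstream-demo reset-demo
git -C reset-demo remote rename origin upstream
git -C reset-demo fetch -q upstream
git -C reset-demo reset --hard upstream/development
git -C reset-demo status --short   # no output means a clean tree
```

Note that `git reset --hard` discards any local commits and uncommitted
changes, which is exactly why it should only be run once your own work is
merged or stashed.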
-Then without closing the remote connection to the container, rebuild the container using the Command
-Palette command (ctrl / cmd + shift + p) `Dev Containers: Rebuild Container`.
+Then, without closing the remote connection to the container, rebuild the
+container using the Command Palette command (ctrl / cmd + shift + p)
+`Dev Containers: Rebuild Container`.
##### 2. Docker container related errors
Firstly, make sure to check for pending updates for Docker.
-Now, sometimes if you are having any weird Docker container related problems such as shown below,
-you might need to delete all **Containers, Images, and Volumes** related to
-**doubtfire-lms/formatif** from inside the **Docker app** and restart the **Docker Engine**.
+Sometimes, if you are having weird Docker container-related problems such as
+those shown below, you might need to delete all **Containers, Images, and
+Volumes** related to **doubtfire-lms/formatif** from inside the **Docker app**
+and restart the **Docker Engine**.

-After you have restarted the docker engine, you can then open the **doubtfire-deploy** folder in VS
-Code using the Command Palette command Dev Containers: Open Folder in Container...
+After you have restarted the Docker Engine, you can then open the
+**doubtfire-deploy** folder in VS Code using the Command Palette command
+`Dev Containers: Open Folder in Container...`
##### 3. Frontend not starting
-Typically, if the frontend fails to start it means that you are missing the required packages. This
-can be resolved by opening a new terminal in the **doubtfire-web** directory and running the command
-`npm install -f`. Once this command has finished running, run the command `npm start` to start the
-frontend, which can usually be accessed at .
+Typically, if the frontend fails to start, it means you are missing the
+required packages. This can be resolved by opening a new terminal in the
+**doubtfire-web** directory and running `npm install -f`. Once this command has
+finished, run `npm start` to start the frontend, which can usually be accessed
+at `localhost:4200`.
##### 4. Wrong sign in page / frontend errors
-Errors that are encountered in the frontend typically show up as alerts that appear at the top right
-of the page.
+Errors that are encountered in the frontend typically show up as alerts that
+appear at the top right of the page.

-Most of the time these will occur due to the backend not working as a result of a migration error,
-which an example of is shown below:
+Most of the time these will occur due to the backend not working as a result of
+a migration error, an example of which is shown below:

-To fix this, enter the backend terminal and press **ctrl + c** to end the backend process. Once this
-process has ended, create a new terminal in the doubtfire-api directory. Run the following commands
-to fix the migration error and relaunch the backend:
+To fix this, enter the backend terminal and press **ctrl + c** to end the
+backend process. Once this process has ended, create a new terminal in the
+doubtfire-api directory. Run the following commands to fix the migration error
+and relaunch the backend:
1. `bundle exec rake db:migrate`
2. `bundle exec rake db:populate`
3. `rails s`
-Reload the frontend once the backend server is up and running. The correct sign-in page like shown
-below should appear.
+Reload the frontend once the backend server is up and running. The correct
+sign-in page, like the one shown below, should appear.

-Login with username **student_1** and password **password** and enter **1** as the Student ID/number
-and sign-in. The dashboard should appear with the enrolled units.
+Log in with username **student_1** and password **password**, enter **1** as
+the Student ID/number, and sign in. The dashboard should appear with the
+enrolled units.
### Documentation and Templates
@@ -139,16 +155,18 @@ and sign-in. The dashboard should appear with the enrolled units.
Currently, ThothTech has 2 documentation repositories:
-- [ThothTech/documentation](https://github.com/thoth-tech/documentation): For internal docs such as
- new features being worked on and spike reports etc.
+- [ThothTech/documentation](https://github.com/thoth-tech/documentation): For
+  internal docs, such as new features being worked on, spike reports, etc.
- [ThothTech/ThothTech-Documentation-Website](https://github.com/thoth-tech/ThothTech-Documentation-Website):
- For external docs to introduce the products and policies to new students and assist onboarding.
+ For external docs to introduce the products and policies to new students and
+ assist onboarding.
#### Templates
-While working on the system, you will often be required to write documentation prior to making
-changes, such as component reviews for a migration, a spike when performing research or even for
-submitting a pull request. Templates for each of these can be found below:
+While working on the system, you will often be required to write documentation
+prior to making changes, such as component reviews for a migration, spike plans
+when performing research, or even pull request descriptions. Templates for each
+of these can be found below:
- [Component Review Template](https://github.com/thoth-tech/documentation/blob/main/docs/Templates/Project-Templates/Component-Review.md)
- [Spike Plan Template](https://github.com/thoth-tech/documentation/blob/main/docs/Templates/SpikePlan-Template.md)
@@ -156,20 +174,23 @@ submitting a pull request. Templates for each of these can be found below:
### Your First Task
-Currently, the most beginner friendly tasks within the OnTrack system are frontend migrations. These
-tasks will involve converting an old component that uses [AngularJS](https://angularjs.org/) and
+Currently, the most beginner-friendly tasks within the OnTrack system are
+frontend migrations. These tasks will involve converting an old component that
+uses [AngularJS](https://angularjs.org/) and
[Bootstrap 3.4](https://getbootstrap.com/docs/3.4/) to a new component that uses
-[Angular 17](https://v17.angular.io/docs) and [TailWindCSS](https://tailwindcss.com/). The guide for
-how to perform a frontend migration can be found
+[Angular 17](https://v17.angular.io/docs) and
+[Tailwind CSS](https://tailwindcss.com/). The guide for how to perform a
+frontend migration can be found
[here](https://github.com/thoth-tech/doubtfire-web/blob/development/MIGRATION-GUIDE.md).
-These migration tasks will give you a good understanding of how the OnTrack system is structured and
-will provide a good introduction on how to make changes to the system.
+These migration tasks will give you a good understanding of how the OnTrack
+system is structured and will provide a good introduction on how to make changes
+to the system.
Here are some resources that might help while working on migration tasks:
- [Flex layout to tailwind migrations](https://blogs.halodoc.io/flex-layout-to-tailwind-migration/)
- [Flex directive equivalents in Tailwind](https://github.com/angular/flex-layout/issues/1426#issuecomment-1302184078)
-As always, if you run into any issues while working on OnTrack, feel free to reach out to others in
-the team. We're always happy to help!
+As always, if you run into any issues while working on OnTrack, feel free to
+reach out to others in the team. We're always happy to help!
diff --git a/src/content/docs/Products/OnTrack/02-set-up-dev.mdx b/src/content/docs/Products/OnTrack/02-set-up-dev.mdx
index 7ec4675e7..9cd76d524 100644
--- a/src/content/docs/Products/OnTrack/02-set-up-dev.mdx
+++ b/src/content/docs/Products/OnTrack/02-set-up-dev.mdx
@@ -5,14 +5,15 @@ sidebar:
order: 2
---
-Welcome to the comprehensive guide for setting up the OnTrack (Doubtfire) Development Environment.
-Follow these steps carefully to ensure a smooth configuration.
+Welcome to the comprehensive guide for setting up the OnTrack (Doubtfire)
+Development Environment. Follow these steps carefully to ensure a smooth
+configuration.
## Step 1: Overview
-In this tutorial, we will set up the OnTrack (Doubtfire) Development Environment. Ensure you are
-familiar with the repositories, as you’ll be working with the Thoth-Tech versions, not the original
-Doubtfire LMS repositories.
+In this tutorial, we will set up the OnTrack (Doubtfire) Development
+Environment. Ensure you are familiar with the repositories, as you’ll be working
+with the Thoth-Tech versions, not the original Doubtfire LMS repositories.
---
@@ -29,8 +30,8 @@ Before proceeding, ensure the following are installed and set up:
## Step 3: Restart if Needed
-If you have faced errors in previous setup attempts, start from scratch. Follow this tutorial
-step-by-step to avoid missing any crucial details.
+If you have faced errors in previous setup attempts, start from scratch. Follow
+this tutorial step by step to avoid missing any crucial details.
---
@@ -50,8 +51,8 @@ Make sure you are using the Thoth-Tech versions of these repositories.
1. Open the `doubtfire-deploy` repository under Thoth-Tech.
2. Click on the **Fork** button in the top-right corner.
-3. **Important**: Untick the checkbox for "Copy the main branch only" to include the development
- branch.
+3. **Important**: Untick the checkbox for "Copy the main branch only" to include
+ the development branch.
4. Click **Create Fork**.
---
@@ -63,7 +64,8 @@ Repeat the forking process for these repositories:
- **doubtfire-api**
- **doubtfire-web**
-Ensure the development branch is included by unticking "Copy the main branch only".
+Ensure the development branch is included by unticking "Copy the main branch
+only".
---
@@ -72,7 +74,8 @@ Ensure the development branch is included by unticking "Copy the main branch onl
## Step 7: Open the Terminal and Navigate to Your Folder
1. Open a terminal (Command Prompt, PowerShell, or Terminal).
-2. Use the `cd` command to navigate to the folder where you want to store the repository. Example:
+2. Use the `cd` command to navigate to the folder where you want to store the
+ repository. Example:
```bash
cd dev
@@ -98,8 +101,9 @@ git clone --recurse-submodules https://github.com/aditya993388/doubtfire-deploy.
## Step 9: Wait for the Cloning Process
-The terminal will display the cloning progress for the repository and its submodules (e.g.,
-`doubtfire-api`, `doubtfire-overseer`, `doubtfire-web`). Wait until the process completes.
+The terminal will display the cloning progress for the repository and its
+submodules (e.g., `doubtfire-api`, `doubtfire-overseer`, `doubtfire-web`). Wait
+until the process completes.
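As an optional sanity check once cloning finishes (a sketch, not a required
step in this guide), you can confirm that the submodules were actually fetched:

```bash
# List the recorded submodules. Each line should show a commit hash
# followed by a submodule path (e.g. doubtfire-api); a leading "-"
# means that submodule was not initialised.
cd doubtfire-deploy
git submodule status
```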
---
@@ -135,7 +139,8 @@ Add the upstream remote to point to the Thoth-Tech repository:
git remote add upstream https://github.com/thoth-tech/doubtfire-deploy.git
```
-This ensures that upstream points to the Thoth-Tech version, not the original LMS repository.
+This ensures that upstream points to the Thoth-Tech version, not the original
+LMS repository.
---
@@ -160,7 +165,8 @@ Ensure both `origin` and `upstream` are listed correctly:
## Step 14: Switch to the Development Branch
-By default, the repository is on the `main` branch. Switch to the `development` branch:
+By default, the repository is on the `main` branch. Switch to the `development`
+branch:
```bash
git switch development
@@ -210,7 +216,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 18: Verify the Upstream Remote
-- Run the following command again to confirm the upstream remote is configured correctly:
+- Run the following command again to confirm the upstream remote is configured
+ correctly:
```bash
git remote -v
@@ -230,7 +237,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
git remote set-url origin https://github.com/aditya993388/doubtfire-web.git
```
- - This ensures that all your changes and pushes will go to your forked repository.
+ - This ensures that all your changes and pushes will go to your forked
+ repository.
---
@@ -256,7 +264,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 21: Switch to the Development Branch for doubtfire-web
-- Ensure you are working on the correct branch by switching to the `development` branch:
+- Ensure you are working on the correct branch by switching to the `development`
+ branch:
```bash
git switch development
```
@@ -270,8 +279,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 22: Pull the Latest Updates for `doubtfire-web`
-- After switching to the `development` branch, pull the latest updates from the repository to ensure
- you have the most recent changes:
+- After switching to the `development` branch, pull the latest updates from the
+ repository to ensure you have the most recent changes:
```bash
git pull
@@ -332,11 +341,13 @@ The `*` symbol next to `development` confirms you are on the correct branch.
- At this stage, only the `origin` remote is configured.
-- Update the `origin` to point to your version of the `doubtfire-api` repository:
+- Update the `origin` to point to your version of the `doubtfire-api`
+ repository:
```bash
git remote set-url origin https://github.com/aditya993388/doubtfire-api.git
```
-- Add the `upstream` remote to point to the Thoth-Tech version of the `doubtfire-api` repository:
+- Add the `upstream` remote to point to the Thoth-Tech version of the
+ `doubtfire-api` repository:
```bash
git remote add upstream https://github.com/thoth-tech/doubtfire-api.git
```
@@ -346,7 +357,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
git remote -v
```
- - Ensure `origin` points to your repository and `upstream` points to Thoth-Tech's repository.
+ - Ensure `origin` points to your repository and `upstream` points to
+ Thoth-Tech's repository.
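If scanning the full `git remote -v` listing by eye feels error-prone,
`git remote get-url` (available in modern Git) prints one URL at a time — an
optional check, not part of the original steps:

```bash
# Print exactly one URL per remote; compare each against the
# expected addresses from the steps above.
git remote get-url origin     # should be your fork of doubtfire-api
git remote get-url upstream   # should be the thoth-tech repository
```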
---
@@ -355,10 +367,11 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 25: Accessing Dev Containers via VS Code
- Exit the terminal and open Visual Studio Code (VS Code).
-- Press `Command + Shift + P` (on Mac) or `Ctrl + Shift + P` (on Windows/Linux) to open the VS Code
- Command Panel.
+- Press `Command + Shift + P` (on Mac) or `Ctrl + Shift + P` (on Windows/Linux)
+  to open the VS Code Command Palette.
- In the Command Palette, search for **Dev Containers**.
-- Select the option **Dev Containers: Open Folder in Container...** from the list.
+- Select the option **Dev Containers: Open Folder in Container...** from the
+ list.
- Before proceeding, ensure that Docker is running in the background.

@@ -368,11 +381,12 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 26: Reading Dev Container Configuration
-- Once the folder opens in the container, look for the **Reading Dev Container Configuration (show
- log)** message in the bottom-right corner of VS Code.
-- Click on **(show log)** to view the progress and ensure everything is being set up correctly.
-- If it’s your first time setting up the container, expect the process to take some time as it
- configures all necessary components.
+- Once the folder opens in the container, look for the **Reading Dev Container
+ Configuration (show log)** message in the bottom-right corner of VS Code.
+- Click on **(show log)** to view the progress and ensure everything is being
+ set up correctly.
+- If it’s your first time setting up the container, expect the process to take
+ some time as it configures all necessary components.
---
@@ -380,13 +394,13 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 27: Running the Front-End and Back-End Applications
-- At this stage, the front-end application is running on the left terminal, while the back-end
- (server) is active on the right terminal.
-- Note: It's common to encounter errors during the first run of the front-end. These may relate to
- missing dependencies or configuration conflicts.
-- Review the error messages carefully. For issues like dependency conflicts, try resolving them with
- appropriate commands such as `npm install` or check the error log paths mentioned in the terminal
- for further details.
+- At this stage, the front-end application is running on the left terminal,
+ while the back-end (server) is active on the right terminal.
+- Note: It's common to encounter errors during the first run of the front-end.
+ These may relate to missing dependencies or configuration conflicts.
+- Review the error messages carefully. For issues like dependency conflicts, try
+ resolving them with appropriate commands such as `npm install` or check the
+ error log paths mentioned in the terminal for further details.
---
@@ -395,11 +409,12 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 28: Navigating to `doubtfire-web` and Resolving Dependencies
- Open a new terminal window in your development environment.
-- Use the command `cd doubtfire-web` to navigate into the `doubtfire-web` directory.
-- Run the command `npm install -f` to forcefully install and resolve any missing dependencies for
- the front-end application.
- - This command ensures that all required packages are installed and any conflicting dependencies
- are resolved.
+- Use the command `cd doubtfire-web` to navigate into the `doubtfire-web`
+ directory.
+- Run the command `npm install -f` to forcefully install and resolve any missing
+ dependencies for the front-end application.
+ - This command ensures that all required packages are installed and any
+ conflicting dependencies are resolved.
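The two Step 28 commands can be run in one line; this sketch assumes your
terminal starts at the `doubtfire-deploy` root:

```bash
# -f (--force) makes npm continue past peer-dependency conflicts,
# which is what "forcefully install" refers to above.
cd doubtfire-web && npm install -f
```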
---
@@ -407,18 +422,20 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 29: Run the Frontend
-- Run the frontend with npm: Open a terminal in the `doubtfire-web` directory and run:
+- Run the frontend with npm: Open a terminal in the `doubtfire-web` directory
+ and run:
```bash
npm start
```
- - This will build and run the frontend application. If successful, it will host the application
- locally on port `4200`. 
+ - This will build and run the frontend application. If successful, it will
+ host the application locally on port `4200`.
+ 
-- Confirmation Notification: Once the frontend is running, you will see a notification in the
- bottom-right corner of your Visual Studio Code window indicating that the application is available
- on `localhost:4200`.
+- Confirmation Notification: Once the frontend is running, you will see a
+ notification in the bottom-right corner of your Visual Studio Code window
+ indicating that the application is available on `localhost:4200`.
---
@@ -426,15 +443,16 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 30: Verify the Running Application
-- Open the **Ports Panel** in Visual Studio Code to see all forwarded ports and their statuses.
+- Open the **Ports Panel** in Visual Studio Code to see all forwarded ports and
+ their statuses.
- Locate the forwarded address for port `4200`.
-- Hover over the address and click on the globe icon to open the application in your default
- browser.
+- Hover over the address and click on the globe icon to open the application in
+ your default browser.

-- If everything is configured correctly, the OnTrack application login page will open in your
- browser.
+- If everything is configured correctly, the OnTrack application login page will
+ open in your browser.
---
@@ -449,7 +467,8 @@ The `*` symbol next to `development` confirms you are on the correct branch.
- Use the default credentials provided in the `CONTRIBUTING.md` file. Example:
- **Username**: `student_1`
- **Password**: Type any placeholder password as this is just for testing.
-- Enter the credentials on the login page and click **Sign In** to log in successfully.
+- Enter the credentials on the login page and click **Sign In** to log in
+ successfully.
---
@@ -461,18 +480,22 @@ The `*` symbol next to `development` confirms you are on the correct branch.
### Step 32: Exploring the Dashboard
-- After logging in, you’ll be directed to the dashboard showing **Enrolled Units**. Examples:
+- After logging in, you’ll be directed to the dashboard showing **Enrolled
+ Units**. Examples:
- Introduction to Programming
- Object-Oriented Programming
- Artificial Intelligence for Games
- Game Programming
- Interact with the dashboard by:
- - Clicking on any enrolled unit to view details such as assignments and grades.
- - Using the "Select Unit" dropdown in the top-left corner to switch between units.
+ - Clicking on any enrolled unit to view details such as assignments and
+ grades.
+ - Using the "Select Unit" dropdown in the top-left corner to switch between
+ units.
- **Progress Dashboard**:
- - Inside each unit, review individual assignments (e.g., Pass Tasks, Distinction Tasks).
+ - Inside each unit, review individual assignments (e.g., Pass Tasks,
+ Distinction Tasks).
- Analyze your performance using tools like the **Burndown Chart**.
---
diff --git a/src/content/docs/Products/OnTrack/03-planner-board.md b/src/content/docs/Products/OnTrack/03-planner-board.md
index a1f25795c..76d5a9f6e 100644
--- a/src/content/docs/Products/OnTrack/03-planner-board.md
+++ b/src/content/docs/Products/OnTrack/03-planner-board.md
@@ -5,18 +5,19 @@ sidebar:
order: 3
---
-After reviewing the components in the doubtfire-web repository, the next step is to access the
-OnTrack Planner Board to begin working on ongoing tickets. The planner board is used to manage and
-track development tasks, including the migration process, feature additions, bug fixes, and other
-project-related work.
+After reviewing the components in the doubtfire-web repository, the next step is
+to access the OnTrack Planner Board to begin working on ongoing tickets. The
+planner board is used to manage and track development tasks, including the
+migration process, feature additions, bug fixes, and other project-related work.
## Steps to Access and Work on Tickets
### 1. Access the Planner Board
- Open the OnTrack Planner Board in your web browser.
-- The board is typically hosted on a project management tool like Jira, Trello, or GitHub Projects.
- The link to the board should be provided in the project documentation or by the team.
+- The board is typically hosted on a project management tool like Jira, Trello,
+ or GitHub Projects. The link to the board should be provided in the project
+ documentation or by the team.
@@ -24,27 +25,30 @@ project-related work.
### 2. Navigate to the "Ongoing Tickets" Section
-- On the planner board, locate the "Ongoing" or "In Progress" column, where all active tickets are
- listed.
+- On the planner board, locate the "Ongoing" or "In Progress" column, where all
+ active tickets are listed.
### 3. Review Ticket Details
- Select a ticket assigned to you or one that matches your skill set.
- Each ticket will have the following details:
- - **Title**: Describes the task (e.g., "Migrate alignment-bar-chart component to TypeScript").
+ - **Title**: Describes the task (e.g., "Migrate alignment-bar-chart component
+ to TypeScript").
- **Description**: Provides context, goals, and steps for the task.
- - **Acceptance Criteria**: Lists what needs to be completed for the ticket to be considered done.
+ - **Acceptance Criteria**: Lists what needs to be completed for the ticket to
+ be considered done.
- **Priority**: Indicates the urgency (e.g., High, Medium, Low).
- **Status**: Current progress (e.g., To Do, In Progress, Code Review).
### 4. Assign the Ticket
-- If a ticket isn’t already assigned, assign it to yourself or notify the team lead.
+- If a ticket isn’t already assigned, assign it to yourself or notify the team
+ lead.
### 5. Start Working on the Ticket
-- Clone or pull the latest version of the repository associated with the ticket (e.g., doubtfire-web
- or doubtfire-api).
+- Clone or pull the latest version of the repository associated with the ticket
+ (e.g., doubtfire-web or doubtfire-api).
- Create a new branch for the ticket using a standard naming convention (e.g.,
feature/migrate-alignment-bar-chart).
- Begin working on the task following the migration or development guidelines.
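After pulling the latest changes, the branching step above can be sketched as
follows; the branch name reuses the example from the text, and `development` is
assumed to be the base branch, as elsewhere in this guide:

```bash
# Branch off the development branch for the ticket, using the
# feature/<name> convention (name from the example above).
git switch development
git switch -c feature/migrate-alignment-bar-chart
```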
@@ -55,14 +59,15 @@ project-related work.
## Best Practices for Ticket Workflow
-- **Document Your Progress**: Add notes or comments to the ticket detailing your approach,
- challenges, and updates.
-- **Collaborate with the Team**: If you encounter any blockers, reach out to your team via Slack,
- Teams, or other communication tools.
-- **Follow Git Workflow**: Ensure you follow proper Git practices, such as rebasing, resolving
- conflicts, and submitting pull requests (PRs) for code review.
-- **Submit for Review**: Once the task is complete, push your branch and create a pull request
- linked to the ticket. Notify the reviewers for feedback.
+- **Document Your Progress**: Add notes or comments to the ticket detailing your
+ approach, challenges, and updates.
+- **Collaborate with the Team**: If you encounter any blockers, reach out to
+ your team via Slack, Teams, or other communication tools.
+- **Follow Git Workflow**: Ensure you follow proper Git practices, such as
+ rebasing, resolving conflicts, and submitting pull requests (PRs) for code
+ review.
+- **Submit for Review**: Once the task is complete, push your branch and create
+ a pull request linked to the ticket. Notify the reviewers for feedback.
@@ -75,12 +80,13 @@ project-related work.
## To start work on a new feature
1. Make your changes locally
-2. Create a draft Pull Request and document the change you are working on. Doing this early will
- make sure that you get feedback on your work quickly.
-3. Complete your work, pushing to your fork's feature branch. This will update your existing PR (no
- need to create new PRs)
-4. Update the status of your PR removing the draft status, and flag someone in the Core team to
- review and incorporate your work.
-5. Address any changes required. Pushing new commits to your branch to update the PR as needed.
-6. Once your PR is merged you can delete your feature branch and repeat this process for new
- features...
+2. Create a draft Pull Request and document the change you are working on. Doing
+ this early will make sure that you get feedback on your work quickly.
+3. Complete your work, pushing to your fork's feature branch. This will update
+ your existing PR (no need to create new PRs).
+4. Update the status of your PR removing the draft status, and flag someone in
+ the Core team to review and incorporate your work.
+5. Address any changes required, pushing new commits to your branch to update
+ the PR as needed.
+6. Once your PR is merged you can delete your feature branch and repeat this
+ process for new features...
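In Git terms, steps 1 and 3 above might look like the following sketch; the
branch name is illustrative, and `origin` is assumed to be your fork:

```bash
git switch -c feature/my-change        # branch for the new feature
# ...edit files, then stage and commit your work...
git add .
git commit -m "Describe the change"
# Pushing to the same branch again later updates the existing PR;
# no new PR is needed for follow-up commits.
git push -u origin feature/my-change
```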
diff --git a/src/content/docs/Products/OnTrack/04-planner-board-etiquette.mdx b/src/content/docs/Products/OnTrack/04-planner-board-etiquette.mdx
index 26b54a7be..e1a7a753c 100644
--- a/src/content/docs/Products/OnTrack/04-planner-board-etiquette.mdx
+++ b/src/content/docs/Products/OnTrack/04-planner-board-etiquette.mdx
@@ -10,60 +10,64 @@ import { Steps } from "@astrojs/starlight/components";
## Proper Planner Board Etiquette
-The planner board is where all tasks are tracked. You can find tasks to claim and work on, or add
-your own tasks that you will complete. Here are some guidelines to ensure smooth teamwork and
-efficient use of the planner board.
+The planner board is where all tasks are tracked. You can find tasks to claim
+and work on, or add your own tasks that you will complete. Here are some
+guidelines to ensure smooth teamwork and efficient use of the planner board.
1. ### Claiming a Task
- **Commit to work:** Only claim a task if you are ready to work on it.
- - **Unclaim if needed:** If you are unable to proceed with a task you've claimed, unclaim it so
- others can take over.
- - **Update status:** Once you claim a task, move it to the "Doing" column to signal that it's
- being actively worked on.
+ - **Unclaim if needed:** If you are unable to proceed with a task you've
+ claimed, unclaim it so others can take over.
+ - **Update status:** Once you claim a task, move it to the "Doing" column to
+ signal that it's being actively worked on.
2. ### Adding a Task
- - **Be clear and concise:** When adding a task, provide a meaningful title and a detailed
- description.
- - **Add checklists:** If the task involves multiple steps, include a checklist to outline them
- clearly.
- - **Use appropriate tags:** Tag the task with relevant labels to categorise it properly, such as
- `Tutorials` if it's tutorial based, or `usage examples` if it's a usage example.
+ - **Be clear and concise:** When adding a task, provide a meaningful title
+ and a detailed description.
+ - **Add checklists:** If the task involves multiple steps, include a
+ checklist to outline them clearly.
+ - **Use appropriate tags:** Tag the task with relevant labels to categorise
+ it properly, such as `Tutorials` if it's tutorial-based, or
+ `usage examples` if it's a usage example.
3. ### Moving Tasks
- - **Include relevant links:** When completing a task, attach links to the pull request (PR) and
- any other relevant information.
- - **Add a completion comment:** Leave a comment on the task card with the date you completed the
- task.
- - **Move to Peer Review:** After completing a task, move it to the "First Peer Review" column so
- a team member can review it.
+ - **Include relevant links:** When completing a task, attach links to the
+ pull request (PR) and any other relevant information.
+ - **Add a completion comment:** Leave a comment on the task card with the
+ date you completed the task.
+ - **Move to Peer Review:** After completing a task, move it to the "First
+ Peer Review" column so a team member can review it.
> **Need help with pull requests?**
- > Follow the [How to Create a Pull Request](/products/splashkit/04-pull-request) guide for
- > detailed instructions.
+ > Follow the
+ > [How to Create a Pull Request](/products/splashkit/04-pull-request) guide
+ > for detailed instructions.
4. ### First Peer Review
- **Follow the review process:** Adhere to the steps outlined in the
[Peer Review Guide](/products/splashkit/06-peer-review).
- - **Request changes if needed:** Provide feedback and request changes if required.
- - **Approval:** Once the task meets the standards, approve it and the PR, then move the task to
- the "Second Peer Review" column.
- - **Leave a comment:** Add a comment with the date and confirmation that you've approved the
- task.
+ - **Request changes if needed:** Provide feedback and request changes if
+ required.
+ - **Approval:** Once the task meets the standards, approve it and the PR,
+ then move the task to the "Second Peer Review" column.
+ - **Leave a comment:** Add a comment with the date and confirmation that
+ you've approved the task.
5. ### Second Peer Review
- - **Follow similar steps:** Conduct the second peer review following the same guidelines as the
- first.
- - **Mentor Review:** After approving the PR, move it to the appropriate "Mentor Review" column.
- - **Comment on approval:** As before, leave a comment with the date and a note indicating you've
- approved the task for mentor review.
+ - **Follow similar steps:** Conduct the second peer review following the same
+ guidelines as the first.
+ - **Mentor Review:** After approving the PR, move it to the appropriate
+ "Mentor Review" column.
+ - **Comment on approval:** As before, leave a comment with the date and a
+ note indicating you've approved the task for mentor review.
6. ### Mentor Review
- **Final review:** The mentor will review the task and provide feedback.
- - **Request changes:** If changes are needed, the mentor will request them and move the task back
- to the "doing" column.
- - **Approval:** Once the mentor approves the task, they will merge the PR and move the task to
- the "completed" column.
+ - **Request changes:** If changes are needed, the mentor will request them
+ and move the task back to the "doing" column.
+ - **Approval:** Once the mentor approves the task, they will merge the PR and
+ move the task to the "completed" column.
diff --git a/src/content/docs/Products/OnTrack/05-planner-board-guidelines.mdx b/src/content/docs/Products/OnTrack/05-planner-board-guidelines.mdx
index 008cfe578..1614db770 100644
--- a/src/content/docs/Products/OnTrack/05-planner-board-guidelines.mdx
+++ b/src/content/docs/Products/OnTrack/05-planner-board-guidelines.mdx
@@ -1,6 +1,7 @@
---
title: Planner Board Guidelines
-description: A guide on how to effectively use Planner Boards and Agile Cards in OnTrack.
+description:
+ A guide on how to effectively use Planner Boards and Agile Cards in OnTrack.
sidebar:
label: "- Guidelines"
order: 5
@@ -10,9 +11,10 @@ import { Steps } from "@astrojs/starlight/components";
## Overview
-Planner Boards are a vital part of managing tasks and ensuring smooth workflow in OnTrack projects.
-This guide will walk you through how to work with Planner Boards, understand sprints, update task
-cards, and maintain clarity on the importance of upstream reviews.
+Planner Boards are a vital part of managing tasks and ensuring smooth workflow
+in OnTrack projects. This guide will walk you through how to work with Planner
+Boards, understand sprints, update task cards, and maintain clarity on the
+importance of upstream reviews.

@@ -27,31 +29,35 @@ Planner Boards play a crucial role in:
## How to Work with Planner Boards
-Planner Boards are used to visually track the status of tasks throughout a sprint. Here’s how you
-can work with them effectively:
+Planner Boards are used to visually track the status of tasks throughout a
+sprint. Here’s how you can work with them effectively:
-- **Columns**: Typically, boards have columns like _To Do_, _In Progress_, _Review_, and _Done_.
- Each task card moves from left to right as work progresses.
-- **Task Cards**: Represent individual tasks or stories. Cards include details like task
- description, deadlines, and assignees.
-- **Agile Focus**: Use the Planner Board to reflect the Agile methodology by splitting larger
- stories into smaller, actionable tasks.
+- **Columns**: Typically, boards have columns like _To Do_, _In Progress_,
+ _Review_, and _Done_. Each task card moves from left to right as work
+ progresses.
+- **Task Cards**: Represent individual tasks or stories. Cards include details
+ like task description, deadlines, and assignees.
+- **Agile Focus**: Use the Planner Board to reflect the Agile methodology by
+ splitting larger stories into smaller, actionable tasks.
- Open the Planner Board for your sprint. Click on Add Task and provide a clear
- title and description. Assign the task to a team member and set a due date. 
+ Open the Planner Board for your sprint. Click on Add Task{" "}
+ and provide a clear title and description. Assign the task to a team
+ member and set a due date. 
- Drag and drop task cards to the appropriate column (*To Do*, *In Progress*, *Review*, *Done*).
- Update the card’s status whenever significant progress is made. 
- Use Agile Cards to split work into smaller tasks for better sprint management. Add clear
- labels (e.g., *Bug Fix*, *Feature*, *Testing*) to indicate the nature of the task.
+ Use Agile Cards to split work into smaller tasks for better sprint
+ management. Add clear labels (e.g., *Bug Fix*, *Feature*, *Testing*) to
+ indicate the nature of the task.
@@ -60,16 +66,18 @@ can work with them effectively:
## How Sprints Work and Their Timelines
-Sprints are short, time-boxed periods where a specific set of tasks is completed. OnTrack follows
-Agile principles to organize work into sprints.
+Sprints are short, time-boxed periods where a specific set of tasks is
+completed. OnTrack follows Agile principles to organize work into sprints.

### Key Points About Sprints
- **Sprint Duration**: Usually 1–2 weeks, depending on project complexity.
-- **Sprint Planning**: Conduct a meeting before each sprint to assign tasks and prioritize work.
-- **Sprint Review**: At the end of a sprint, review completed work and gather feedback.
+- **Sprint Planning**: Conduct a meeting before each sprint to assign tasks and
+ prioritize work.
+- **Sprint Review**: At the end of a sprint, review completed work and gather
+ feedback.
### Sprint Workflow on Planner Boards
@@ -89,41 +97,48 @@ Agile principles to organize work into sprints.
## How to Update Task Cards
-Keeping task cards up-to-date is essential to ensure transparency and progress tracking.
+Keeping task cards up to date is essential to ensure transparency and progress
+tracking.
### Steps to Update a Task Card:
1. Open the task card you want to update.
2. Edit the following fields if necessary:
- - **Status**: Update to reflect the current phase (e.g., _In Progress_, _Review_).
+ - **Status**: Update to reflect the current phase (e.g., _In Progress_,
+ _Review_).
- **Assignee**: Reassign the task if the original assignee changes.
- **Description**: Add additional details or changes made.
- - **Attachments**: Upload relevant files, like designs, test cases, or documentation.
+ - **Attachments**: Upload relevant files, like designs, test cases, or
+ documentation.
3. Save the changes and notify team members if updates require their attention.
---
## Reducing Confusion: Why and How to Update the Planner Board
-Proper use of the Planner Board eliminates misunderstandings regarding task ownership, deadlines,
-and sprint goals. Here's why updating is critical:
+Proper use of the Planner Board eliminates misunderstandings regarding task
+ownership, deadlines, and sprint goals. Here's why updating is critical:
- **Transparency**: Keeps everyone informed about task progress.
-- **Accountability**: Ensures team members are responsible for their assigned tasks.
+- **Accountability**: Ensures team members are responsible for their assigned
+ tasks.
- **Efficiency**: Reduces the need for repeated clarifications during meetings.
### Tips for Regular Updates:
- Schedule a daily check-in to update task statuses.
-- Encourage all team members to take responsibility for keeping their cards current.
-- Use tags or labels like _Blocked_, _Urgent_, or _Critical_ to highlight important tasks.
+- Encourage all team members to take responsibility for keeping their cards
+ current.
+- Use tags or labels like _Blocked_, _Urgent_, or _Critical_ to highlight
+ important tasks.
---
## Understanding Upstream Reviews
-Upstream reviews ensure that pull requests (PRs) are aligned with project standards before merging.
-This is closely tied to task cards in the Planner Board.
+Upstream reviews ensure that pull requests (PRs) are aligned with project
+standards before merging. This is closely tied to task cards in the Planner
+Board.

@@ -145,15 +160,18 @@ This is closely tied to task cards in the Planner Board.
## Planner Boards Across Semesters
-As OnTrack evolves, you may work with different Planner Boards in various semesters. To ensure
-consistency:
+As OnTrack evolves, you may work with different Planner Boards in various
+semesters. To ensure consistency:
-- **Semester Boards**: Use separate boards for each semester to track progress and goals.
-- **Agile Alignment**: Ensure that Agile Cards are utilized for consistency across all boards.
-- **Archiving**: Archive old boards to declutter and maintain focus on current work.
+- **Semester Boards**: Use separate boards for each semester to track progress
+ and goals.
+- **Agile Alignment**: Ensure that Agile Cards are utilized for consistency
+ across all boards.
+- **Archiving**: Archive old boards to declutter and maintain focus on current
+ work.
---
-By following these guidelines, you can use Planner Boards and Agile Cards effectively to stay
-organized, work better as a team, and keep your project running smoothly while keeping your mentor
-updated.
+By following these guidelines, you can use Planner Boards and Agile Cards
+effectively to stay organized, work better as a team, and keep your project
+running smoothly while keeping your mentor updated.
diff --git a/src/content/docs/Products/OnTrack/06-pull-request-template.md b/src/content/docs/Products/OnTrack/06-pull-request-template.md
index 7aa80bac6..d5f12e2bf 100644
--- a/src/content/docs/Products/OnTrack/06-pull-request-template.md
+++ b/src/content/docs/Products/OnTrack/06-pull-request-template.md
@@ -8,30 +8,33 @@ sidebar:
## Template for making a pull request
-When making a pull request to the Ontrack repository, please use the following template to ensure
-that your pull request covers all the required steps and can be reviewed by your peers. The template
-includes a checklist of items that you need to complete before submitting your pull request, some of
-which may not be relevant to your specific pull request. Please ensure that you complete all the
+When making a pull request to the OnTrack repository, please use the following
+template to ensure that your pull request covers all the required steps and can
+be reviewed by your peers. The template includes a checklist of items that you
+need to complete before submitting your pull request, some of which may not be
+relevant to your specific pull request. Please ensure that you complete all the
relevant items before submitting your pull request.
```markdown
# Description
-Please include a summary of the changes and the related issue. Please also include relevant
-motivation and context. List any dependencies that are required for this change.
+Please include a summary of the changes and the related issue. Please also
+include relevant motivation and context. List any dependencies that are required
+for this change.
## Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
-- [ ] Breaking change (fix or feature that would cause existing functionality to not work as
- expected)
+- [ ] Breaking change (fix or feature that would cause existing functionality to
+ not work as expected)
- [ ] Documentation (update or new)
## How Has This Been Tested?
-Please describe the tests that you ran to verify your changes. Provide instructions so we can
-reproduce. Please also list any relevant details for your test configuration.
+Please describe the tests that you ran to verify your changes. Provide
+instructions so we can reproduce. Please also list any relevant details for your
+test configuration.
- [ ] Tested in latest Chrome
- [ ] Tested in latest Firefox
@@ -69,4 +72,5 @@ Please list the folders and files added/modified with this pull request.
- [ ] folder/file
```
-Please refer to Pull Request Guide for more information on creating a pull request.
+Please refer to Pull Request Guide for more information on creating a pull
+request.
diff --git a/src/content/docs/Products/OnTrack/07-peer-review-web.mdx b/src/content/docs/Products/OnTrack/07-peer-review-web.mdx
index c298c6b43..32a1d0f94 100644
--- a/src/content/docs/Products/OnTrack/07-peer-review-web.mdx
+++ b/src/content/docs/Products/OnTrack/07-peer-review-web.mdx
@@ -11,32 +11,38 @@ import { Aside } from "@astrojs/starlight/components";
-In Ontrack, peer reviews are a vital process to ensure code quality, maintainability, and
-consistency across the website development project. Every pull request (PR) must follow the
-Peer-Review Checklist, which checks for key factors like functionality, code readability, and
-documentation.
+In OnTrack, peer reviews are a vital process to ensure code quality,
+maintainability, and consistency across the website development project. Every
+pull request (PR) must follow the Peer-Review Checklist, which checks for key
+factors like functionality, code readability, and documentation.
-Additionally, the Peer-Review Prompts serve as a conversation starter for reviewers, encouraging
-collaboration while allowing for a thorough and constructive review process.
+Additionally, the Peer-Review Prompts serve as a conversation starter for
+reviewers, encouraging collaboration while allowing for a thorough and
+constructive review process.
### Ontrack Peer-Review Checklist
-The following checklist is required to be completed for every review to ensure high-quality
-contributions.
+The following checklist is required to be completed for every review to ensure
+high-quality contributions.
```plaintext
## General Information
@@ -84,51 +90,59 @@ contributions.
- **Code Quality**:
- [ ] Repository: Ensure the PR is made to the correct repository.
- - [ ] Readability: Is the code easy to read and follow? Are comments included where necessary?
- - [ ] Maintainability: Can this code be maintained or extended easily in the future?
+ - [ ] Readability: Is the code easy to read and follow? Are comments included
+ where necessary?
+ - [ ] Maintainability: Can this code be maintained or extended easily in the
+ future?
- **Functionality**:
- [ ] Correctness: Does the code meet the task requirements?
- - [ ] Existing Functionality: Has the impact on existing functionality been considered and tested?
+ - [ ] Existing Functionality: Has the impact on existing functionality been
+ considered and tested?
- **Testing**:
- [ ] Test Coverage: Are unit tests provided for new or modified code?
- [ ] Test Results: Have all tests passed successfully?
- **Documentation**:
- - [ ] Documentation: Is the inline and external documentation updated and clear?
+ - [ ] Documentation: Is the inline and external documentation updated and
+ clear?
- **Pull Request Details**:
- [ ] PR Description: Is the problem being solved clearly described?
- - [ ] Checklist Completion: Have all relevant checklist items been reviewed and completed?
+ - [ ] Checklist Completion: Have all relevant checklist items been reviewed
+ and completed?
---
## OnTrack Peer-Review Prompts
-Use these prompts to guide discussions and ensure high-quality code contributions:
-
-- **Type of Change**: Is the PR correctly identifying the type of change (bug fix, new feature,
- etc.)?
-- **Code Readability**: Is the code well-structured and easy to follow? Could better comments,
- names, or organization improve it?
-- **Maintainability**: Is the code modular and easy to maintain? Does it introduce any technical
- debt?
-- **Code Simplicity**: Are there redundant or overly complex parts of the code that could be
- simplified?
-- **Edge Cases**: Does the code account for edge cases? What scenarios might cause it to break?
-- **Test Thoroughness**: Does the testing cover all edge cases and failure paths? Are there enough
- tests to ensure code reliability?
-- **Backward Compatibility**: Does the change break any existing functionality? If so, is backward
- compatibility handled or documented?
-- **Performance Considerations**: Could this code impact performance negatively? Can it be optimized
- while maintaining readability?
-- **Security Concerns**: Does this change introduce any security risks? Is input validation handled
- properly?
-- **Dependencies**: Are new dependencies necessary? Could they conflict with existing libraries?
- Could this functionality be achieved without new dependencies?
-- **Documentation**: Is the documentation clear and thorough enough for new developers to
- understand? Does it cover API or external interface changes?
+Use these prompts to guide discussions and ensure high-quality code
+contributions:
+
+- **Type of Change**: Is the PR correctly identifying the type of change (bug
+ fix, new feature, etc.)?
+- **Code Readability**: Is the code well-structured and easy to follow? Could
+ better comments, names, or organization improve it?
+- **Maintainability**: Is the code modular and easy to maintain? Does it
+ introduce any technical debt?
+- **Code Simplicity**: Are there redundant or overly complex parts of the code
+ that could be simplified?
+- **Edge Cases**: Does the code account for edge cases? What scenarios might
+ cause it to break?
+- **Test Thoroughness**: Does the testing cover all edge cases and failure
+ paths? Are there enough tests to ensure code reliability?
+- **Backward Compatibility**: Does the change break any existing functionality?
+ If so, is backward compatibility handled or documented?
+- **Performance Considerations**: Could this code impact performance negatively?
+ Can it be optimized while maintaining readability?
+- **Security Concerns**: Does this change introduce any security risks? Is input
+ validation handled properly?
+- **Dependencies**: Are new dependencies necessary? Could they conflict with
+ existing libraries? Could this functionality be achieved without new
+ dependencies?
+- **Documentation**: Is the documentation clear and thorough enough for new
+ developers to understand? Does it cover API or external interface changes?
---
@@ -136,44 +150,54 @@ Use these prompts to guide discussions and ensure high-quality code contribution
### `.mdx` Files
-- **Content Accuracy**: Ensure that the content is clear and accurate. Double-check for any errors
- in documentation or guides.
-- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is correctly filled out.
-- **Component Usage**: Verify that components like `LinkCard` or `CardGrid` are used appropriately
- within `.mdx` files.
+- **Content Accuracy**: Ensure that the content is clear and accurate.
+ Double-check for any errors in documentation or guides.
+- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is
+ correctly filled out.
+- **Component Usage**: Verify that components like `LinkCard` or `CardGrid` are
+ used appropriately within `.mdx` files.
### `.css` Files
-- **Consistency**: Check alignment with the **Styling Guide** and consistent use of variables (e.g.,
- colors, fonts, spacing).
-- **Accessibility**: Ensure animations respect user preferences, and contrast ratios meet **WCAG 2.1
- AA** standards.
+- **Consistency**: Check alignment with the **Styling Guide** and consistent use
+ of variables (e.g., colors, fonts, spacing).
+- **Accessibility**: Ensure animations respect user preferences, and contrast
+ ratios meet **WCAG 2.1 AA** standards.
- **Naming Conventions**: Verify CSS class names follow consistent patterns.
### `.jsx`/`.tsx` Files
-- **Functionality**: Validate that interactive components (e.g., forms, sliders) work as expected
- and meet task requirements.
+- **Functionality**: Validate that interactive components (e.g., forms, sliders)
+ work as expected and meet task requirements.
- **Performance**: Identify unnecessary re-renders or performance concerns.
-- **Code Style**: Ensure compliance with **React/JSX** best practices and linting rules.
+- **Code Style**: Ensure compliance with **React/JSX** best practices and
+ linting rules.
### `.astro` Files
-- **Structure**: Verify page/component structure aligns with **Astro standards**.
-- **Reusability**: Look for repetitive code that could be refactored into reusable components.
+- **Structure**: Verify page/component structure aligns with **Astro
+ standards**.
+- **Reusability**: Look for repetitive code that could be refactored into
+ reusable components.
---
## Useful Resources for Reviewers
-- **Starlight Documentation**: [Starlight Docs](https://starlight.astro.build/getting-started/)
-- **Astro Documentation**: [Astro Docs](https://docs.astro.build/en/getting-started/)
-- **WCAG 2.1 AA Guidelines**: [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
-- **MDN CSS Documentation**: [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
-- **React Documentation**: [React Official Docs](https://reactjs.org/docs/getting-started.html)
+- **Starlight Documentation**:
+ [Starlight Docs](https://starlight.astro.build/getting-started/)
+- **Astro Documentation**:
+ [Astro Docs](https://docs.astro.build/en/getting-started/)
+- **WCAG 2.1 AA Guidelines**:
+ [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
+- **MDN CSS Documentation**:
+ [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
+- **React Documentation**:
+ [React Official Docs](https://reactjs.org/docs/getting-started.html)
---
-By following these guidelines, you’ll help maintain high standards of code quality, performance, and
-accessibility in the OnTrack project. Peer reviews not only ensure the quality of the code but also
-foster collaboration and shared learning among the team.
+By following these guidelines, you’ll help maintain high standards of code
+quality, performance, and accessibility in the OnTrack project. Peer reviews not
+only ensure the quality of the code but also foster collaboration and shared
+learning among the team.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/LDAP-and-devise-research-documentation.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/LDAP-and-devise-research-documentation.md
index c48696faa..10afef394 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/LDAP-and-devise-research-documentation.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/LDAP-and-devise-research-documentation.md
@@ -24,24 +24,26 @@ title: Research Documentation:LDAP Server & Devise
## Introduction
-The current distribution of OnTrack supports numerous authentication methods, but all such methods
-use external resources to provide the authentication services. As such, it has been proposed that a
-**Lightweight Directory Access Protocol (LDAP) server** be created and added to the current Docker
-container mix used for OnTrack. This would allow hosting and testing of the LDAP server’s
-authentication capabilities. How to implement an LDAP server to suit these needs, however, is
-uncertain.
-
-Within this, it has been suggested that **Devise**, an authentication solution that can be
-integrated with LDAP server, could be used as part of achieving this goal of creating an in-house
-authentication methodology. There are also Devise modules that are already configured to work
-seamlessly with an LDAP server setup. However, it is uncertain whether Devise is needed to achieve
-the full goals of the Enhance Authentication team (as outlined on the team Trello Card) and whether
-Devise can be used without its Ruby on Rails-based UI (as OnTrack makes use of Angular on the
-frontend to communicate with the backend).
-
-Hence, it is clear that there needs to be some research conducted into the LDAP servers and Devise
-to determine the best course of action when integrating these features into the pre-established
-OnTrack architecture.
+The current distribution of OnTrack supports numerous authentication methods,
+but all such methods use external resources to provide the authentication
+services. As such, it has been proposed that a **Lightweight Directory Access
+Protocol (LDAP) server** be created and added to the current Docker container
+mix used for OnTrack. This would allow hosting and testing of the LDAP server’s
+authentication capabilities. How to implement an LDAP server to suit these
+needs, however, is uncertain.
+
+Within this, it has been suggested that **Devise**, an authentication solution
+that can be integrated with LDAP server, could be used as part of achieving this
+goal of creating an in-house authentication methodology. There are also Devise
+modules that are already configured to work seamlessly with an LDAP server
+setup. However, it is uncertain whether Devise is needed to achieve the full
+goals of the Enhanced Authentication team (as outlined on the team Trello Card)
+and whether Devise can be used without its Ruby on Rails-based UI (as OnTrack
+makes use of Angular on the frontend to communicate with the backend).
+
+Hence, it is clear that there needs to be some research conducted into the LDAP
+servers and Devise to determine the best course of action when integrating these
+features into the pre-established OnTrack architecture.
---
@@ -50,15 +52,18 @@ OnTrack architecture.
The aims of this research and its concurrent documentation are as follows:
1. Gain a better understanding of what an LDAP server is;
-2. Understand how an LDAP server could be used to reach our in-house authentication goals;
+2. Understand how an LDAP server could be used to reach our in-house
+ authentication goals;
3. Gain a better understanding of what Devise is and what services it provides;
-4. Gain insight into how a Devise LDAP server would be set up and integrated into the current
- OnTrack architecture, including adding it to the Docker container mix;
-5. Determine whether Devise’s use of a Ruby on Rails UI will impede the ability to utilise it, given
- that OnTrack’s frontend uses Angular and communicates with the backend via the Application
- Programming Interface (API);
-6. If (from _Research Aim 5_) it is determined that Devise **_can_** be used with the current
- OnTrack frontend and backend setup, investigate how it will be set up and integrated.
+4. Gain insight into how a Devise LDAP server would be set up and integrated
+ into the current OnTrack architecture, including adding it to the Docker
+ container mix;
+5. Determine whether Devise’s use of a Ruby on Rails UI will impede the ability
+ to utilise it, given that OnTrack’s frontend uses Angular and communicates
+ with the backend via the Application Programming Interface (API);
+6. If (from _Research Aim 5_) it is determined that Devise **_can_** be used
+ with the current OnTrack frontend and backend setup, investigate how it will
+ be set up and integrated.
---
@@ -66,15 +71,17 @@ The aims of this research and its concurrent documentation are as follows:
### LDAP Servers
-A Lightweight Directory Access Protocol server facilitates client-server queries of directories over
-the TCP/IP Internet protocol [1]. Similar to a database, the directories stored by the LDAP database
-contain attribute-based information which can be queried by clients and responded to by the server
-to achieve information-checking goals [1]. In the case of the OnTrack system, the LDAP server would
-store relevant user details which would be queried by clients in order to approve or deny the
-specific user access to the OnTrack systems – providing an authentication system.
+A Lightweight Directory Access Protocol server facilitates client-server queries
+of directories over the TCP/IP Internet protocol [1]. Similar to a database, the
+directories stored by the LDAP database contain attribute-based information
+which can be queried by clients and responded to by the server to achieve
+information-checking goals [1]. In the case of the OnTrack system, the LDAP
+server would store relevant user details which would be queried by clients in
+order to approve or deny the specific user access to the OnTrack systems –
+providing an authentication system.
-The LDAP protocol allows for the following operations to be conducted within the database/directory
-accessed by the LDAP server [2]:
+The LDAP protocol allows for the following operations to be conducted within the
+database/directory accessed by the LDAP server [2]:
- Add: Adds new files and/or entries;
- Delete: Removes files and/or entries;
@@ -82,160 +89,187 @@ accessed by the LDAP server [2]:
- Compare: Determine the similarities and differences between files; and
- Modify: alter an existing file/entry.
-All of these operations are essential to adding the appropriate user details which are used during
-the OnTrack authentication process. Additionally, the lightweight nature of LDAP and directories
-results in the ability to handle high volumes of traffic and quick response times to client-server
-queries [1] [2]. These two elements of LDAP servers make it ideal for the OnTrack authentication
+All of these operations are essential to adding the appropriate user details
+which are used during the OnTrack authentication process. Additionally, the
+lightweight nature of LDAP and directories results in the ability to handle high
+volumes of traffic and quick response times to client-server queries [1] [2].
+These two elements of LDAP servers make it ideal for the OnTrack authentication
process, especially due to the high number of users of the service.
-Hence, an LDAP server could be utilised to achieve our in-house authentication goals through:
+Hence, an LDAP server could be utilised to achieve our in-house authentication
+goals through:
-- Utilising the current user information database or creating a new database purely for
- authentication queries.
-- Coding the LDAP server to respond to client requests for access to the systems by accessing the
- user information database and cross-referencing the details provided by the client and those
- stored in the database, giving appropriate responses based on whether the data matches (success –
- the user has been authenticated and access can be provided) or whether the data is mismatched
+- Utilising the current user information database or creating a new database
+ purely for authentication queries.
+- Coding the LDAP server to respond to client requests for access to the systems
+ by accessing the user information database and cross-referencing the details
+ provided by the client and those stored in the database, giving appropriate
+ responses based on whether the data matches (success – the user has been
+ authenticated and access can be provided) or whether the data is mismatched
(failure – the user is unable to be authenticated and access is denied).
-- The creation and use of the LDAP server results in the OnTrack authentication process becoming
- fully independent – meeting the primary authentication goal of in-house authentication.
+- The creation and use of the LDAP server results in the OnTrack authentication
+ process becoming fully independent – meeting the primary authentication goal
+ of in-house authentication.
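The cross-referencing flow described in the bullets above can be sketched in Ruby. This is a deliberately simplified, hypothetical illustration: the names `DIRECTORY` and `authenticate` are ours, and an in-memory hash stands in for the LDAP directory that a real server would query over TCP/IP.

```ruby
# Hypothetical sketch of the cross-referencing step described above.
# A real LDAP server would be queried over the network; here an
# in-memory hash stands in for the user-information directory, keyed
# by a "uid"-style attribute as a typical LDAP entry would be.
require "digest"

DIRECTORY = {
  "jbloggs" => Digest::SHA256.hexdigest("s3cret-passphrase")
}

# Cross-reference the client-supplied credentials against the stored
# entry: a match authenticates the user (success – access provided);
# a missing entry or mismatched password denies access (failure).
def authenticate(uid, password)
  stored = DIRECTORY[uid]
  return :failure if stored.nil?
  Digest::SHA256.hexdigest(password) == stored ? :success : :failure
end
```

The essential point mirrors the description above: the server never decides on the credentials alone, only on whether they match what the directory already holds.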
### Devise
-Devise is a module-based authentication solution created in the Ruby on Rails programming language
-[3]. Taken directly from the [Devise GitHub Repo] (), the 10
-modules within Devise are outlined by the creators as follows:
-
-> - Database Authenticatable: hashes and stores a password in the database to validate the
-> authenticity of a user while signing in. The authentication can be done both through POST
-> requests or HTTP Basic Authentication.
-> - Omniauthable: adds OmniAuth () support.
-> - Confirmable: sends emails with confirmation instructions and verifies whether an account is
-> already confirmed during sign in.
+Devise is a module-based authentication solution created in the Ruby on Rails
+programming language [3]. Taken directly from the
+[Devise GitHub Repo](https://github.com/heartcombo/devise), the 10 modules
+within Devise are outlined by the creators as follows:
+
+> - Database Authenticatable: hashes and stores a password in the database to
+> validate the authenticity of a user while signing in. The authentication can
+> be done both through POST requests or HTTP Basic Authentication.
+> - Omniauthable: adds OmniAuth ()
+> support.
+> - Confirmable: sends emails with confirmation instructions and verifies
+> whether an account is already confirmed during sign in.
> - Recoverable: resets the user password and sends reset instructions.
-> - Registerable: handles signing up users through a registration process, also allowing them to
-> edit and destroy their account.
-> - Rememberable: manages generating and clearing a token for remembering the user from a saved
-> cookie.
+> - Registerable: handles signing up users through a registration process, also
+> allowing them to edit and destroy their account.
+> - Rememberable: manages generating and clearing a token for remembering the
+> user from a saved cookie.
> - Trackable: tracks sign in count, timestamps and IP address.
-> - Timeoutable: expires sessions that have not been active in a specified period of time.
-> - Validatable: provides validations of email and password. It's optional and can be customized, so
-> you're able to define your own validations.
-> - Lockable: locks an account after a specified number of failed sign-in attempts. Can unlock via
-> email or after a specified time period. (Source:
+> - Timeoutable: expires sessions that have not been active in a specified
+> period of time.
+> - Validatable: provides validations of email and password. It's optional and
+> can be customized, so you're able to define your own validations.
+> - Lockable: locks an account after a specified number of failed sign-in
+> attempts. Can unlock via email or after a specified time period. (Source:
> [Devise GitHub Repo](https://github.com/heartcombo/devise))
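In a Rails application, the modules quoted above are switched on per model via the `devise` class method. The following is a configuration fragment only, not a runnable program: it assumes a Rails app with the devise gem installed, and the exact module mix shown is illustrative rather than a decided OnTrack configuration.

```ruby
# Configuration fragment only — assumes a Rails app with the devise gem.
# Each symbol enables one of the modules listed above; modules not
# named here (e.g. :omniauthable, :confirmable) stay disabled.
class User < ApplicationRecord
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable,
         :timeoutable, :validatable, :lockable
end
```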
-Hence, the Devise authentication solution provides a wholistic authentication service, one which
-would be highly suitable to meet the OnTrack authentication goals.
+Hence, the Devise authentication solution provides a holistic authentication
+service, one which would be highly suitable to meet the OnTrack authentication
+goals.
### Devise LDAP Authentication
-Devise can be used concurrently with LDAP servers to provide complete authentication solutions. In
-fact, GitHub user Curtis Schiewek has already created an integrated Devise LDAP authentication
-solution called [Devise LDAP Authenticatable]
-(). Devise LDAP Authenticatable is a
-plugin which allows for the services of Devise to be used with a pre-existing LDAP server (which we
-aim to create) and in line with the Devise framework [4]. However, due to this plugin making use of
-the Ruby on Rails programming within Devise, it must first be determined if Devise (and, hence, the
-Devise LDAP Authenticatable) can be used as part of the OnTrack architecture.
-
-Devise utilises Ruby on Rails (shortened to “Rails”) as the language framework for the web
-applications user interface (UI) that use its authentication services [4]. As such, the credentials
-entered into the Rails UI would be those which are authenticated by the Devise back-end
-authentication service; the communication is Rails-to-Rails [4]. However, the OnTrack architecture
-currently utilises the AngularJS programming language as the framework for the web application
-front-end UI, making use of a Rails-based API to communicate with the backend. Additionally, there
-is a current migration underway within the OnTrack architecture which will result in the change from
-utilising AngularJS to having the UI programmed using Angular (TypesScript).
-
-As such, a solution to the UI/backend communication based on AngularJS would be redundant due to
-this language migration occuring, especially as the Devise LDAP server would likely not be ready to
-implement until after the conclusion of this migration. Hence, the research conducted needs to
-determine the following: whether it is possible to utilise the Rails-based authentication services
-of Devise within an Angular (TypeScript) UI and Rails API architecture.
+Devise can be used concurrently with LDAP servers to provide complete
+authentication solutions. In fact, GitHub user Curtis Schiewek has already
+created an integrated Devise LDAP authentication solution called
+[Devise LDAP Authenticatable]().
+Devise LDAP Authenticatable is a plugin which allows for the services of Devise
+to be used with a pre-existing LDAP server (which we aim to create) and in line
+with the Devise framework [4]. However, due to this plugin making use of the
+Ruby on Rails programming within Devise, it must first be determined if Devise
+(and, hence, the Devise LDAP Authenticatable) can be used as part of the OnTrack
+architecture.
+
+Devise utilises Ruby on Rails (shortened to “Rails”) as the framework for the
+user interface (UI) of web applications that use its authentication services
+[4]. As such, the credentials entered into the Rails UI would be those
+which are authenticated by the Devise back-end authentication service; the
+communication is Rails-to-Rails [4]. However, the OnTrack architecture currently
+utilises the AngularJS framework for the web application front-end UI, making
+use of a Rails-based API to communicate with
+the backend. Additionally, there is a current migration underway within the
+OnTrack architecture which will result in the change from utilising AngularJS to
+having the UI programmed using Angular (TypeScript).
+
+As such, a solution to the UI/backend communication based on AngularJS would be
+redundant due to this language migration occurring, especially as the Devise LDAP
+server would likely not be ready to implement until after the conclusion of this
+migration. Hence, the research conducted needs to determine the following:
+whether it is possible to utilise the Rails-based authentication services of
+Devise within an Angular (TypeScript) UI and Rails API architecture.
### Angular (TypeScript) UI & Devise Integration
-From extensive research regarding _if_ and, if it can, _how_ Devise can be used in an application
-architecture which has an Angular (TypeScript) UI, it was found that **there are solutions available
-to facilitate this**.
+From extensive research regarding _whether_ and, if so, _how_ Devise can be used
+in an application architecture which has an Angular (TypeScript) UI, it was
+found that **there are solutions available to facilitate this**.
-There are numerous token-based authentication solutions available through GitHub which enable Devise
-to communicate with a variety of programming languages which may be used within the architecture of
-applications. Of relevance to our proposed implementation are the
+There are numerous token-based authentication solutions available through GitHub
+which enable Devise to communicate with a variety of programming languages which
+may be used within the architecture of applications. Of relevance to our
+proposed implementation are the
[Devise Token Auth](https://github.com/lynndylanhurley/devise_token_auth) and
-[Angular Token](https://github.com/neroniaky/angular-token) GitHub repositories. Devise Token Auth
-implements a token-based method of authentication for use with Devise, its functionalities being
-able to be harnessed through referencing the solution within the appropriate Gemfile [5]. This type
-of token-based authentication can be implemented as part of our solution to meet the authentication
-goals. More importantly, Angular Token works to facilitate communication between Angular
-(TypeScript) solutions and the Rail-based services of Devise [6]. It seamlessly works in conjunction
-with the Devise Token Auth service, with the Devise Token Auth repository even providing a demo of
-these two solutions successfully integrated [5] [6].
-
-Hence, to facilitate the communication between OnTrack’s Angular (TypeScript) UI and the Rails-based
-services of Devise, it is suggested that these two token-based authentication solutions be
-integrated within the application architecture as demonstrated in the ‘Setup’ sections of their
-respective repositories.
+[Angular Token](https://github.com/neroniaky/angular-token) GitHub repositories.
+Devise Token Auth implements a token-based method of authentication for use with
+Devise; its functionality can be harnessed by referencing the gem within the
+appropriate Gemfile [5]. This type of token-based authentication can be
+implemented as part of our solution to meet the authentication goals. More
+importantly, Angular Token works to facilitate communication between Angular
+(TypeScript) solutions and the Rails-based services of Devise [6]. It works
+seamlessly in conjunction with the Devise Token Auth
+service, with the Devise Token Auth repository even providing a demo of these
+two solutions successfully integrated [5] [6].
+
+Hence, to facilitate the communication between OnTrack’s Angular (TypeScript) UI
+and the Rails-based services of Devise, it is suggested that these two
+token-based authentication solutions be integrated within the application
+architecture as demonstrated in the ‘Setup’ sections of their respective
+repositories.
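To make the token-based approach concrete, the following is a minimal,
illustrative Ruby sketch of the principle behind such solutions: a signed,
expiring token is issued at login and verified on later requests. This is *not*
Devise Token Auth's actual implementation; the secret, expiry, and method names
are assumptions for demonstration only.

```ruby
require 'openssl'
require 'base64'

# Illustrative only: a real deployment would load a securely stored secret.
SECRET = 'demo-secret-key'.freeze

# Issue a signed token containing the user id and an expiry timestamp.
def issue_token(uid, ttl_seconds = 3600)
  payload = "#{uid}:#{Time.now.to_i + ttl_seconds}"
  signature = OpenSSL::HMAC.hexdigest('SHA256', SECRET, payload)
  Base64.strict_encode64("#{payload}:#{signature}")
end

# Verify the signature and expiry; return the user id, or nil if invalid.
def verify_token(token)
  uid, expiry, signature = Base64.strict_decode64(token).split(':')
  payload = "#{uid}:#{expiry}"
  return nil unless OpenSSL::HMAC.hexdigest('SHA256', SECRET, payload) == signature
  return nil if Time.now.to_i > expiry.to_i
  uid
rescue ArgumentError
  nil # token was not valid Base64
end
```

In the real libraries the token travels in HTTP headers between the Angular
client and the Rails API, which is exactly the communication gap described
above.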
### Devise LDAP Server Setup & Integration
-The following briefly outline the steps that will be involved in setting up and implementing the
-Devise LDAP server authentication solution as part of the OnTrack architecture:
-
-- Run the [OpenLDAP Docker image](https://github.com/osixia/docker-openldap) and follow the
- instructions for setting up a new LDAP server. OpenLDAP has been selected due to it being
- open-source, reliable, and also its being used as part of the Devise LDAP Authenticatable solution
- which we will also be using. Additionally, using a Docker image of an OpenLDAP server which allows
- for the creation of new LDAP servers suited to the needs of the company will allow for development
- to begin with a solid base and make adding the authentication server solution to the Docker
- container mix easier;
-- Setup and perform an initial configuration of [Devise](https://github.com/heartcombo/devise)
- through adding it to the appropriate Gemfile and following the instructions detailed in the
- [Devise README.md] () file;
+The following briefly outlines the steps that will be involved in setting up and
+implementing the Devise LDAP server authentication solution as part of the
+OnTrack architecture:
+
+- Run the [OpenLDAP Docker image](https://github.com/osixia/docker-openldap) and
+  follow the instructions for setting up a new LDAP server. OpenLDAP has been
+  selected because it is open-source, reliable, and already used as part of the
+  Devise LDAP Authenticatable solution which we will also be using.
+  Additionally, using a Docker image of an OpenLDAP server which allows for the
+  creation of new LDAP servers suited to the needs of the company will give
+  development a solid base and make adding the authentication server solution
+  to the Docker container mix easier;
+- Set up and perform an initial configuration of
+ [Devise](https://github.com/heartcombo/devise) through adding it to the
+  appropriate Gemfile and following the instructions detailed in the
+  [Devise README.md]() file;
- Set up and perform an initial configuration of [Devise LDAP Authenticatable]
- () through following the processes
- outlined in the
+ () through following
+ the processes outlined in the
[Devise LDAP Authenticatable README.md](https://github.com/cschiewek/devise_ldap_authenticatable/blob/default/README.md)
file;
- Initially configure and integrate both the
[Devise Token Auth](https://github.com/lynndylanhurley/devise_token_auth) and
- [Angular Token](https://github.com/neroniaky/angular-token) token-based solutions by following the
- installation and configuration information detailed in their respective README.md files:
- found[here]( \_token_auth/blob/ master/README.md) and
+ [Angular Token](https://github.com/neroniaky/angular-token) token-based
+ solutions by following the installation and configuration information detailed
+ in their respective README.md files:
  found
  [here](https://github.com/lynndylanhurley/devise_token_auth/blob/master/README.md)
  and
[here](https://github.com/neroniaky/angular-token/blob/master/README.md);
-- Finish the configuration of these elements by populating them with real data and integrating the
- elements into the OnTrack architecture, ensuring that they all communicate correctly and respond
- as expected.
+- Finish the configuration of these elements by populating them with real data
+ and integrating the elements into the OnTrack architecture, ensuring that they
+ all communicate correctly and respond as expected.
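The steps above can be sketched in terms of the Rails files they touch. Gem
names are taken from the linked repositories, but the mount path and initializer
values shown here are illustrative assumptions, not a finalised configuration:

```ruby
# Gemfile -- referencing the authentication gems described above:
gem 'devise'
gem 'devise_ldap_authenticatable'
gem 'devise_token_auth'

# config/routes.rb -- Devise Token Auth mounts its token endpoints
# (the 'auth' path is an illustrative choice):
#   mount_devise_token_auth_for 'User', at: 'auth'

# config/initializers/devise.rb -- example LDAP settings used by
# devise_ldap_authenticatable (values here are placeholders to be replaced
# with the real OpenLDAP server details):
#   config.ldap_config = "#{Rails.root}/config/ldap.yml"
#   config.ldap_create_user = true
#   config.ldap_check_group_membership = false
```

The respective README.md files remain the authoritative source for the full
installation and configuration process.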
-As the OpenLDAP server created is drawn from a Docker Image, it is thought that this will simplify
-the process of adding the finished server to the Docker container mix used for the OnTrack
-deployment.
+As the OpenLDAP server created is drawn from a Docker Image, it is thought that
+this will simplify the process of adding the finished server to the Docker
+container mix used for the OnTrack deployment.
-Again, this is a very high-level view of the expected setup and integration flow of how the Devise
-LDAP server will be developed. As upskilling surrounding these technologies and their respective
-code/languages is conducted, this methodology may be made redundant or be found to have missing
-items. However, this will be the current basis of how development will occur for now.
+Again, this is a very high-level view of the expected setup and integration flow
+of how the Devise LDAP server will be developed. As upskilling surrounding these
+technologies and their respective code/languages is conducted, this methodology
+may be made redundant or be found to have missing items. However, this will be
+the current basis of how development will occur for now.
---
## Research Outcomes
-Through an examination of the Research Findings documented, understandings of what LDAP servers are
-and how they work; what Devise is and what services it offers; and how LDAP and Devise can be
-harnessed to provide an authentication solution have been vastly improved.
-
-Most importantly, from this conducted research, it has been found that **Devise can be used as part
-of the OnTrack in-house authentication solution**. As previously detailed, two token-based services
-work to facilitate communication between OnTrack’s Angular UI and Devise’s Rails-based and, as such,
-integrating these services will allow for the use of Devise within our authentication solution.
-
-A high-level overview of how the Devise LDAP server solution will be developed has also been given.
-However, as mentioned, future upskilling into the practical elements of these technologies may
-result in the actual implementation methodologies being found to be much different. However, as
-determining whether Devise can be used _at all_ was deemed to be the most important aspect of this
-research (as it impacts all other aspects of developing the in-house authentication solution), such
-a broad overview will be sufficient for now.
+Through an examination of the documented Research Findings, understanding of
+what LDAP servers are and how they work; what Devise is and what services it
+offers; and how LDAP and Devise can be harnessed to provide an authentication
+solution has been vastly improved.
+
+Most importantly, from this conducted research, it has been found that **Devise
+can be used as part of the OnTrack in-house authentication solution**. As
+previously detailed, two token-based services work to facilitate communication
+between OnTrack’s Angular UI and Devise’s Rails-based services and, as such,
+integrating these services will allow for the use of Devise within our
+authentication solution.
+
+A high-level overview of how the Devise LDAP server solution will be developed
+has also been given. However, as mentioned, future upskilling into the practical
+elements of these technologies may result in the actual implementation
+methodologies being found to be much different. Nevertheless, as determining
+whether Devise can be used _at all_ was deemed the most important aspect of
+this research (as it impacts all other aspects of developing the in-house
+authentication solution), such a broad overview will be sufficient for now.
---
@@ -247,9 +281,11 @@ a broad overview will be sufficient for now.
[2] Okta (n.d.). What Is LDAP & How Does It Work? [Webpage]. Available:
-[3] heartcombo (n.d.) Devise [GitHub repository]. Available:
+[3] heartcombo (n.d.) Devise [GitHub repository]. Available:
+
-[4] C. Schiewek (2020, Jul. 24). Devise LDAP Authenticatable [GitHub repository]. Available:
+[4] C. Schiewek (2020, Jul. 24). Devise LDAP Authenticatable [GitHub
+repository]. Available:
[5] L. D. Hurley (n.d.). Devise Token Auth [GitHub repository]. Available:
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/current-and-proposed-authentication-evaluation-5.1.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/current-and-proposed-authentication-evaluation-5.1.md
index 030efeebb..a6043cb64 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/current-and-proposed-authentication-evaluation-5.1.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/current-and-proposed-authentication-evaluation-5.1.md
@@ -12,112 +12,129 @@ title: Review of Current and Proposed Authentication Solutions
[Proposed Authentication Enhancements](#proposed-authentication-enhancements)
- [The Proposed Authentication Mechanism](#the-proposed-authentication-mechanism)
-- [Advancements of the Previous Authentication Mechanisms and how it Addresses Issues of the Old
- Method]
+- [Advancements of the Previous Authentication Mechanisms and how it Addresses
+  Issues of the Old Method](#advancements-of-the-previous-authentication-mechanisms-and-how-it-addresses-issues-of-the-old-method)
- [Potential Issues and Concerns that must be Considered](#potential-issues-and-concerns-that-must-be-considered)
## Overview
-The purpose of this documentation is to formally review the current authentication mechanisms which
-are in place within the OnTrack architecture and compare this to the proposed authentication
-solution in development by the Deployment project team (Enhance Authentication). Within this
-evaluation, the current authentication setup and mechanisms will be described, and then analysed in
-terms of the risks and issues encompassed within such a setup. The new authentication solution which
-is proposed to be developed and implemented will then be described, analysing how this new system
-aims to advance the authentication capabilities of its predecessor and how it will work to mitigate
-the risks and issues of the current system which it will replace. Additionally, the issues and
-concerns which may develop as a result of the implementation of the proposed authentication system
-will be examined, including the regulatory and compliance considerations which must be addressed
-during the development of the new system.
+The purpose of this documentation is to formally review the current
+authentication mechanisms which are in place within the OnTrack architecture and
+compare this to the proposed authentication solution in development by the
+Deployment project team (Enhance Authentication). Within this evaluation, the
+current authentication setup and mechanisms will be described, and then analysed
+in terms of the risks and issues encompassed within such a setup. The new
+authentication solution which is proposed to be developed and implemented will
+then be described, analysing how this new system aims to advance the
+authentication capabilities of its predecessor and how it will work to mitigate
+the risks and issues of the current system which it will replace. Additionally,
+the issues and concerns which may develop as a result of the implementation of
+the proposed authentication system will be examined, including the regulatory
+and compliance considerations which must be addressed during the development of
+the new system.
## The State of the Current Authentication Mechanisms
### The Current Authentication Setup
-Currently, the OnTrack system relies upon the authentication provided by its client, Deakin
-University. The authentication mechanism utilised by Deakin University is a Single Sign On (SSO)
-system which utilises Multi-Factor Authentication (MFA) technology through _Duo Security_. As part
-of the SSO process, the user is prompted to enter their username and password to log in to the
-Deakin SSO system and, as such, to allow access to OnTrack. Once the username and password have been
-verified, the MFA system requires the user to confirm that they are the rightful owner of the
-account through responding to a “login request” sent to their allocated personal device. Once this
-test is passed, access is granted to the OnTrack system. In the case where the user is already
-logged in to Deakin SSO, has been accessing other Deakin services and then chooses to visit the
-OnTrack site, they are automatically logged in through the SSO functionality. Hence, the current
-authentication system is reliant upon third-party authentication, whose MFA capabilities have been
-outsourced (to Duo Security).
+Currently, the OnTrack system relies upon the authentication provided by its
+client, Deakin University. The authentication mechanism utilised by Deakin
+University is a Single Sign On (SSO) system which utilises Multi-Factor
+Authentication (MFA) technology through _Duo Security_. As part of the SSO
+process, the user is prompted to enter their username and password to log in to
+the Deakin SSO system and, as such, to allow access to OnTrack. Once the
+username and password have been verified, the MFA system requires the user to
+confirm that they are the rightful owner of the account through responding to a
+“login request” sent to their allocated personal device. Once this test is
+passed, access is granted to the OnTrack system. In the case where the user is
+already logged in to Deakin SSO, has been accessing other Deakin services and
+then chooses to visit the OnTrack site, they are automatically logged in through
+the SSO functionality. Hence, the current authentication system is reliant upon
+third-party authentication, whose MFA capabilities have been outsourced (to Duo
+Security).
### Risks and Issues
-An analysis of the current authentication system presented the following issues and risks:
-
-As OnTrack relies upon Deakin SSO services, it is assumed that both the login and logout
-functionalities would be handled by this service. However, even after a user has logged out Deakin
-SSO and (supposedly) all concurrent accounts accessed through the single login, a user’s OnTrack
-account remains logged in and accessible for a period of days (perhaps weeks). This security flaw
-means that the token used to access the OnTrack account first through the Deakin SSO technology
-continues to be stored within the user’s browser for an extended time period, allowing the OnTrack
-account to be repeatedly accessed even when the user is not currently signed in to Deakin SSO. For
-cases where the user is on a shared computer, this is a high risk for unfiltered access into the
-user’s account by other actors. Currently, the only method to properly sign out of the OnTrack
-system is for a user to select their avatar icon and, from the displayed drop-down menu, to choose
-“sign out” from there. This is often overlooked, especially as it would be assumed that the logout
-process would be handled by Deakin SSO services. Even when the user does follow the OnTrack process
-to sign out of the account, the user is redirected to a broken link where the page cannot be
-displayed – the token is cleared from the browser. Although this does solve the issue at hand, it is
-not an ideal user experience
-
-A second risk is that this current method of authentication relies on third-party and outsourced
-authentication technologies. Deakin SSO and MFA is facilitated by Duo Security, although these
-technologies may be highly efficient, secure and reliable, such a reliance on third-party software
-means that the backend workings of this software is not able to be accessed and understood by the
-OnTrack team. Additionally, use of third-party software requires additional sharing, transmission,
-and storage of user information on systems which are not able to be managed by the OnTrack team. In
-the case of a security breach, the OnTrack team are also forced to rely upon the providers of these
-technologies to:
+An analysis of the current authentication system presented the following issues
+and risks:
+
+As OnTrack relies upon Deakin SSO services, it is assumed that both the login
+and logout functionalities would be handled by this service. However, even after
+a user has logged out of Deakin SSO and (supposedly) all concurrent accounts
+accessed through the single login, a user’s OnTrack account remains logged in
+and accessible for a period of days (perhaps weeks). This security flaw means
+that the token used to access the OnTrack account first through the Deakin SSO
+technology continues to be stored within the user’s browser for an extended time
+period, allowing the OnTrack account to be repeatedly accessed even when the
+user is not currently signed in to Deakin SSO. For cases where the user is on a
+shared computer, this is a high risk for unfiltered access into the user’s
+account by other actors. Currently, the only method to properly sign out of the
+OnTrack system is for a user to select their avatar icon and, from the displayed
+drop-down menu, to choose “sign out” from there. This is often overlooked,
+especially as it would be assumed that the logout process would be handled by
+Deakin SSO services. Even when the user does follow the OnTrack process to sign
+out of the account, the user is redirected to a broken link where the page
+cannot be displayed, though the token is cleared from the browser. Although this
+does solve the issue at hand, it is not an ideal user experience.
+
+A second risk is that this current method of authentication relies on
+third-party and outsourced authentication technologies. Deakin SSO and MFA are
+facilitated by Duo Security. Although these technologies may be highly
+efficient, secure, and reliable, such a reliance on third-party software means
+that the backend workings of this software are not able to be accessed and
+understood by the OnTrack team. Additionally, use of third-party software
+requires additional sharing, transmission, and storage of user information on
+systems which are not able to be managed by the OnTrack team. In the case of a
+security breach, the OnTrack team are also forced to rely upon the providers of
+these technologies to:
- Report that the breach happened
- Report which details and systems have been compromised
- Fix the issues which led to the breach
- Secure the system and continue normal business operations
-Having to rely upon other vendors for these processes removes the control and information
-transparency OnTrack has regarding the scale and nature of security breaches. This leaves the
-company “in the dark” about what has occurred and is frustrating, especially when there are capable
-members within the company who would be able to respond to such events, perhaps more efficiently
-than these vendors.
+Having to rely upon other vendors for these processes removes the control and
+information transparency OnTrack has regarding the scale and nature of security
+breaches. This leaves the company “in the dark” about what has occurred and is
+frustrating, especially when there are capable members within the company who
+would be able to respond to such events, perhaps more efficiently than these
+vendors.
-Hence, from these risks and issues associated with the current OnTrack authentication mechanisms, it
-is clear why new technologies and methods of authentication have been proposed to be developed and
-implemented.
+Hence, from these risks and issues associated with the current OnTrack
+authentication mechanisms, it is clear why new technologies and methods of
+authentication have been proposed to be developed and implemented.
## Proposed Authentication Enhancements
-This section will detail the proposed authentication elements to be added to the OnTrack
-architecture in order to improve the state of the authentication mechanisms and to alleviate some of
-the risks and errors in the described current setup.
+This section will detail the proposed authentication elements to be added to the
+OnTrack architecture in order to improve the state of the authentication
+mechanisms and to alleviate some of the risks and errors in the described
+current setup.
### The Proposed Authentication Mechanism
-The proposed solution to be implemented by the Enhance Authentication team has several elements, as
-follows:
+The proposed solution to be implemented by the Enhance Authentication team has
+several elements, as follows:
-- An extension of the current OnTrack API, adding functionality to facilitate user management
-- Password management features, for users and admin manipulation. Users can either send a request to
- the admin to authorise the password or do so themselves. It is important that, as part of this
- feature, the admin is able to manipulate the password (i.e. send the request for a password
- change). The password is never transferred in plaintext at any time during this process.
-- A Devise LDAP server option which handles the authentication processes for OnTrack, allowing the
- authentication to be performed fully “in-house” rather than outsourced to other authentication
- mechanisms.
+- An extension of the current OnTrack API, adding functionality to facilitate
+ user management
+- Password management features, for users and admin manipulation. Users can
+ either send a request to the admin to authorise the password or do so
+ themselves. It is important that, as part of this feature, the admin is able
+ to manipulate the password (i.e. send the request for a password change). The
+ password is never transferred in plaintext at any time during this process.
+- A Devise LDAP server option which handles the authentication processes for
+ OnTrack, allowing the authentication to be performed fully “in-house” rather
+ than outsourced to other authentication mechanisms.
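To illustrate the requirement above that the password is never transferred in
plaintext, the following is a small, self-contained Ruby sketch of an
admin-initiated reset flow: the admin can only trigger a reset token, the user
redeems it, and only a salted digest of the password is ever stored. This is a
conceptual sketch, not the team's implementation; the class and method names
are invented for this example.

```ruby
require 'openssl'
require 'securerandom'

# Conceptual sketch of the proposed password-management rules: only a salted
# digest is stored, and an admin can request (but never see or set) a password.
class UserRecord
  def initialize(password)
    store_password(password)
  end

  # Admin-side: issue a one-time reset token; in the real system this would be
  # delivered to the user. The admin never handles the password itself.
  def request_password_reset
    @reset_token = SecureRandom.urlsafe_base64(32)
  end

  # User-side: redeem the token to set a new password.
  def reset_password(token, new_password)
    return false unless @reset_token && token == @reset_token
    store_password(new_password)
    @reset_token = nil
    true
  end

  def authenticate?(password)
    digest(password, @salt) == @digest
  end

  private

  def store_password(password)
    @salt = SecureRandom.hex(16)
    @digest = digest(password, @salt)
  end

  # PBKDF2 key stretching; parameters here are illustrative, not a policy.
  def digest(password, salt)
    OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 20_000, 32,
                               OpenSSL::Digest.new('SHA256')).unpack1('H*')
  end
end
```

A real implementation would also expire tokens and notify the user of the
change request, as recommended later in this document.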
### Advancements of the Previous Authentication Mechanisms and how it Addresses Issues of the Old Method
-From the proposed elements to be added to the authentication mechanisms, the following advancements
-and addressing of issues relevant to the current system are achieved:
+From the proposed elements to be added to the authentication mechanisms, the
+following advancements and addressing of issues relevant to the current system
+are achieved:
| Element | Advancement |
| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -130,33 +147,40 @@ and addressing of issues relevant to the current system are achieved:
### Potential Issues and Concerns that must be Considered
-Within the development and deployment phases related to implementing these proposed authentication
-mechanisms, the following must be closely considered and addressed:
-
-- It must be ensured that, while system admins have the ability to grant access to change the
- passwords, that they cannot directly access the passwords themselves. There must be a clear
- separation of information, and passwords must have secure encryption applied to them.
-- Password change requests must be validated to ensure that they are coming from the user, and not
- someone pretending to be the user in order to gain control over the account. A security mechanism
- such as requiring the requestee’s date of birth, or an answer to a security question set by the
- user, is suggested. Additionally, sending an automatic email or text to inform the user of the
+Within the development and deployment phases related to implementing these
+proposed authentication mechanisms, the following must be closely considered and
+addressed:
+
+- It must be ensured that, while system admins have the ability to grant access
+  to change the passwords, they cannot directly access the passwords themselves.
+  There must be a clear separation of information, and passwords must have
+  secure encryption applied to them.
+- Password change requests must be validated to ensure that they are coming from
+ the user, and not someone pretending to be the user in order to gain control
+  over the account. A security mechanism such as requiring the requester’s date
+ of birth, or an answer to a security question set by the user, is suggested.
+ Additionally, sending an automatic email or text to inform the user of the
  request for a change of password is also recommended.
-- The developed authentication system, including how user information is stored, transmitted, and
- later deleted (which includes considerations of retention laws) must adhere to the appropriate
- laws and guidelines set out by the Australian Federal Government (including the _Privacy Act
- 1988_), as well as other specifications mandated within each of the Australian states and
+- The developed authentication system, including how user information is stored,
+ transmitted, and later deleted (which includes considerations of retention
+ laws) must adhere to the appropriate laws and guidelines set out by the
+ Australian Federal Government (including the _Privacy Act 1988_), as well as
+ other specifications mandated within each of the Australian states and
territories.
-- Additionally, as OnTrack provides its services to international users (both in terms of
- international students studying online at Deakin and through providing its services to
- international clients), OnTrack’s authentication and information storage processes must also
- adhere to the laws enacted within the relevant international jurisdictions. For example, the data
- protection laws enacted within the GDPR must be adhered to by companies who provide services to
- citizens protected under the GDPR, regardless of where the company is situated (see
- )。
-- Finally, as all code used within the OnTrack architecture is open-source and publicly available
- through the Thoth Tech company site, how this effects the security of the proposed authentication
- system once developed and added to the site must be considered. Specifically, it must be examined
- whether public access to the internal workings of the authentication system increases the threat
- of a data breach as anyone can view the code, find vulnerabilities, and then create exploits to
- leverage such vulnerabilities. Hence, it is recommended that thorough vulnerability analysis is
- conducted on the code and systems to be included within the proposed authentication solutions。
+- Additionally, as OnTrack provides its services to international users (both in
+ terms of international students studying online at Deakin and through
+ providing its services to international clients), OnTrack’s authentication and
+ information storage processes must also adhere to the laws enacted within the
+ relevant international jurisdictions. For example, the data protection laws
+ enacted within the GDPR must be adhered to by companies who provide services
+ to citizens protected under the GDPR, regardless of where the company is
+  situated (see ).
+- Finally, as all code used within the OnTrack architecture is open-source and
+  publicly available through the Thoth Tech company site, how this affects the
+ security of the proposed authentication system once developed and added to the
+ site must be considered. Specifically, it must be examined whether public
+ access to the internal workings of the authentication system increases the
+ threat of a data breach as anyone can view the code, find vulnerabilities, and
+ then create exploits to leverage such vulnerabilities. Hence, it is
+ recommended that thorough vulnerability analysis is conducted on the code and
+  systems to be included within the proposed authentication solutions.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/testing-strategy-enhance-authentication.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/testing-strategy-enhance-authentication.md
index a3af5afe6..e748eb453 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/testing-strategy-enhance-authentication.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/Enhanced_Authentication/testing-strategy-enhance-authentication.md
@@ -16,31 +16,36 @@ title: Testing Strategy for Enhance Authentication
## Introduction
-This testing strategy describes the features and artifacts that the Enhance Authentication team will
-be contributing to the OnTrack architecture, particularly focusing on the specifics regarding the
-testing of these elements once developed. Conducting testing according to this Testing Strategy is
-paramount to ensure that the created elements are functioning as expected before they are added to
-the main OnTrack GitHub repository and associated Docker containers for deployment.
+This testing strategy describes the features and artifacts that the Enhance
+Authentication team will be contributing to the OnTrack architecture,
+particularly focusing on the specifics regarding the testing of these elements
+once developed. Conducting testing according to this Testing Strategy is
+paramount to ensure that the created elements are functioning as expected before
+they are added to the main OnTrack GitHub repository and associated Docker
+containers for deployment.
## Overview of Deliverables to be Tested
-As part of the development of elements conducted by the Enhance Authentication team, the following
-features will be created and will require testing:
+As part of the development of elements conducted by the Enhance Authentication
+team, the following features will be created and will require testing:
-- Extend the current features within the OnTrack API to allow for user management to be achieved
-- Add a feature to allow users and admins to change user passwords, facilitating users being able to
- request admin to change the user’s password due to it being forgotten
-- Add a Devise LDAP server option to facilitate in-house authentication of users of the OnTrack
- system
+- Extend the current features within the OnTrack API to allow for user
+ management to be achieved
+- Add a feature to allow users and admins to change user passwords, facilitating
+ users being able to request admin to change the user’s password due to it
+ being forgotten
+- Add a Devise LDAP server option to facilitate in-house authentication of users
+ of the OnTrack system
-These features, once functioning, will be the deliverables of the Enhance Authentication team.
-Additionally, this testing strategy will also be a deliverable, as will any research documentation
-conducted in the process of implementing these features.
+These features, once functioning, will be the deliverables of the Enhance
+Authentication team. Additionally, this testing strategy will also be a
+deliverable, as will any research documentation conducted in the process of
+implementing these features.
## References
-The following resources are relevant to the work that is to be done by the Enhance Authentication
-team.
+The following resources are relevant to the work that is to be done by the
+Enhance Authentication team.
Links to resources used as part of development and testing:
@@ -48,8 +53,8 @@ Links to resources used for as part of development and testing:
- Visual Studio Code:
- Docker:
-Links to the relevant OnTrack repositories which will be accessed and altered by the team to
-implement the new authentication features:
+Links to the relevant OnTrack repositories which will be accessed and altered by
+the team to implement the new authentication features:
- Doubtfire-Web:
- Doubtfire-Deploy:
@@ -57,7 +62,8 @@ implement the new authentication features:
Links to resources describing the coding languages used:
-- Ruby-on-Rails for updating of the API to add new features:
+- Ruby-on-Rails for updating the API to add new features:
+
- Angular/Typescript for front-end development:
Links to resources relevant to the Devise LDAP server:
@@ -71,57 +77,68 @@ Links to resources relevant to the Devise LDAP server:
## QA Deliverables
-As part of the processes to provide Quality Assurance within our deliverables, the following
-artifacts will be produced in line with our development processes:
-
-- Testing Plan: This is official recording and documentation of the processes undertaken as part of
- the testing phase. Included within the testing plan is details of each specific test undertaken on
- a developed feature, and documents the test number, scenario, inputs, and the expected versus
- actual results. This allows for our team to ensure that the final testing outcomes meet all
- requirements and expectations of the deliverables, and allows the testing processes and outcomes
- to be viewed and understood by others, both within the team and wider company. A template of the
- Thoth Tech Testing Plan can be found here:
+As part of the processes to provide Quality Assurance within our deliverables,
+the following artifacts will be produced in line with our development processes:
+
+- Testing Plan: This is the official record and documentation of the processes
+ undertaken as part of the testing phase. The testing plan includes details of
+ each specific test undertaken on a developed feature, documenting the test
+ number, scenario, inputs, and the expected versus actual results. This allows
+ our team to ensure that the final testing outcomes meet all requirements and
+ expectations of the deliverables, and allows the testing processes and
+ outcomes to be viewed and understood by others, both within the team and the
+ wider company. A template of the Thoth Tech Testing Plan can be found here:
-- Test Case documentation: This is an official recording of the details regarding a specific testing
- scenario and will be different depending on the feature to be tested ( for example, the Test Case
- for the extension of the API management features will be different from that of the LDAP Devise
- Server). The Test Case includes further details regarding the environment which the testing was
- conducted (including details regarding operating systems and versions of software) and the
- sequence of steps which were performed to create the test and implement it. Overall, this
- documentation provides detail into the specifics of each test on the developed features, including
- suffice detail for others to understand the conditions of the testing process and, if applicable,
- to replicate the test themselves. It is closely linked to the information recorded within the
- Testing Plan.
+- Test Case documentation: This is an official record of the details regarding
+ a specific testing scenario and will differ depending on the feature to be
+ tested (for example, the Test Case for the extension of the API management
+ features will be different from that of the LDAP Devise Server). The Test
+ Case includes further details regarding the environment in which the testing
+ was conducted (including details of operating systems and versions of
+ software) and the sequence of steps performed to create and implement the
+ test. Overall, this documentation details the specifics of each test on the
+ developed features, providing sufficient detail for others to understand the
+ conditions of the testing process and, if applicable, to replicate the test
+ themselves. It is closely linked to the information recorded within the
+ Testing Plan.
## Test Management
-This section outlines the resources that will be used during the testing processes for the API user
-management extension and the integration of a Devise LDAP server into the OnTrack architecture:
+This section outlines the resources that will be used during the testing
+processes for the API user management extension and the integration of a Devise
+LDAP server into the OnTrack architecture:
- GitHub will be used to facilitate version control of the tests developed
-- Visual Studio code will be used to create tests relevant to both the user management/API extension
- and the Devise LDAP server integration, ensuring that all components of the expected functionality
- are tested
-- Ruby-on-Rails will be used to create tests for functionality of features integrated within the API
-- The data used within the testing will be users and data that have been created specifically for
- the testing processes. The functionality of the users and their data simulate the real users and
- data of the OnTrack system to facilitate realistic testing without effecting the actual users
- during the testing phase
-- Docker will be used to build the OnTrack environment to allow for testing to be conducted within
- it, and to view the effects of the added features on how the environment runs
+- Visual Studio Code will be used to create tests relevant to both the user
+ management/API extension and the Devise LDAP server integration, ensuring that
+ all components of the expected functionality are tested
+- Ruby-on-Rails will be used to create tests for functionality of features
+ integrated within the API
+- The data used within the testing will be users and data that have been
+ created specifically for the testing processes. The users and their data
+ simulate the real users and data of the OnTrack system to facilitate
+ realistic testing without affecting the actual users during the testing
+ phase
+- Docker will be used to build the OnTrack environment to allow for testing to
+ be conducted within it, and to view the effects of the added features on how
+ the environment runs
## Scope of Testing
This section outlines the type of tests which exist within the OnTrack project.
-- There are API test files and processes written in Rails which already exist which are relevant to
- testing other processes within the OnTrack system. While these tests are not able to be used for
- our testing purposes, they do provide examples of how to write the testing processes and provide
- sample user accounts and data which can be utilised within the testing of our features
-- New testing processes will be written by the Enhance Authentication developers as part of the
- development and testing phases, based on pre-existing test files within the OnTrack architecture
- and using some of the testing processes that have already been developed
-- Regarding the Devise LDAP server, the respective GitHub pages for these technologies (referenced
- above) also include processes for testing their implementation. These guidelines may also be
- consulted within the testing phases, especially in the earlier parts of interacting with these new
+- There are API test files and processes written in Rails which already exist
+ which are relevant to testing other processes within the OnTrack system. While
+ these tests are not able to be used for our testing purposes, they do provide
+ examples of how to write the testing processes and provide sample user
+ accounts and data which can be utilised within the testing of our features
+- New testing processes will be written by the Enhance Authentication developers
+ as part of the development and testing phases, based on pre-existing test
+ files within the OnTrack architecture and using some of the testing processes
+ that have already been developed
+- Regarding the Devise LDAP server, the respective GitHub pages for these
+ technologies (referenced above) also include processes for testing their
+ implementation. These guidelines may also be consulted within the testing
+ phases, especially in the earlier parts of interacting with these new
technologies
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/Google Cloud/google-cloud-research.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/Google Cloud/google-cloud-research.md
index 3d783ded2..9d4af41a2 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/Google Cloud/google-cloud-research.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/Google Cloud/google-cloud-research.md
@@ -26,8 +26,8 @@ title: Google Cloud - Research Documentation
## Introduction
-The Deployment project aim is to create an employee-run deployment of OnTrack separated from the
-existing Deakin version and hosted on Google Cloud.
+The aim of the Deployment project is to create an employee-run deployment of
+OnTrack separated from the existing Deakin version and hosted on Google Cloud.
## Aims
@@ -39,96 +39,108 @@ existing Deakin version and hosted on Google Cloud.
### Google Cloud
-**Google Cloud Platform (GCP)** is a cloud service platform that allows you to build cloud resources
-and platforms, leveraging cloud native services and features.
+**Google Cloud Platform (GCP)** is a cloud service platform that allows you to
+build cloud resources and platforms, leveraging cloud native services and
+features.
-Google Compute Engine (GCE) is the Infrastructure as a Service (IaaS) component of Google CLoud
-Platform (GCP) [1]. It is the service that provides virtual machines (VMs) as server resources in
-the cloud.
+Google Compute Engine (GCE) is the Infrastructure as a Service (IaaS) component
+of Google Cloud Platform (GCP) [1]. It is the service that provides virtual
+machines (VMs) as server resources in the cloud.
-Cloud Build and Cloud Run are services offered by Google Cloud to achieve part of the CI/CD
-deployment. Cloud Build is designed to help you execute your builds on Google Cloud from your source
-code in your git repositories. Cloud Run is designed to run containers as a serverless, compute
-platform.
+Cloud Build and Cloud Run are services offered by Google Cloud to achieve part
+of the CI/CD deployment. Cloud Build is designed to help you execute your builds
+on Google Cloud from your source code in your git repositories. Cloud Run is
+designed to run containers on a serverless compute platform.
-Once your Google Cloud account and project is setup, there are IAM users & roles which will need to
-be setup to access Google Cloud services. The account administrator would also need to enable the
-desired Google Cloud services, such as Cloud Build and Cloud Run, as they are not enabled by
-default. IAM users & roles will then need to be assigned to the enabled services.
+Once your Google Cloud account and project are set up, there are IAM users and
+roles which will need to be configured to access Google Cloud services. The
+account administrator will also need to enable the desired Google Cloud
+services, such as Cloud Build and Cloud Run, as they are not enabled by
+default. IAM users and roles will then need to be assigned to the enabled
+services.
### CI/CD Deployments
-Once you are setup with a Google Cloud account and project, you can setup a CI/CD pipeline to
-perform the steps of build, test, and deploy to Google Cloud. The source code will need to be in a
-git repository and Google Cloud will need access to monitor the actions of the git repository. For
-example, you can setup either commits or pull requests to be monitored which will trigger a build
+Once you are set up with a Google Cloud account and project, you can set up a
+CI/CD pipeline to perform the steps of build, test, and deploy to Google Cloud.
+The source code will need to be in a git repository, and Google Cloud will need
+access to monitor the activity of the git repository. For example, you can
+configure either commits or pull requests to be monitored, which will trigger a
+build
within Cloud Build.
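
As an illustration only, a minimal Cloud Build configuration might build and
deploy a container when the trigger fires. The image name, service name, and
region below are placeholders, not the project's actual values:

```yaml
# Hypothetical cloudbuild.yaml: build a container image, push it, and
# deploy it to Cloud Run whenever the configured trigger fires.
steps:
  # Build the image from the Dockerfile in the repository root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/ontrack-web:$COMMIT_SHA', '.']
  # Push the image so Cloud Run can pull it
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/ontrack-web:$COMMIT_SHA']
  # Deploy the pushed image to a Cloud Run service
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'ontrack-web',
           '--image', 'gcr.io/$PROJECT_ID/ontrack-web:$COMMIT_SHA',
           '--region', 'australia-southeast2']
images:
  - 'gcr.io/$PROJECT_ID/ontrack-web:$COMMIT_SHA'
```

`$PROJECT_ID` and `$COMMIT_SHA` are substitutions Cloud Build provides at build
time.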
### Docker containers
-**Docker** allows you to package and run an application in a loosely isolated environment [2]
-referred to as a container. The container is a portable and lightweight image that contains
-everything needed to run an application without any reliance of installed apps or components on the
-host server.
+**Docker** allows you to package and run an application in a loosely isolated
+environment [2] referred to as a container. The container is a portable and
+lightweight image that contains everything needed to run an application without
+relying on apps or components installed on the host server.
-Docker compose is the tool to define and run Docker applications. A `Dockerfile` defines the app's
-environment. A `docker-compose.yaml` file defines the services that make up your app in YAML. And
-`docker compose up` starts and runs your entire app [3].
+Docker Compose is the tool to define and run Docker applications. A `Dockerfile`
+defines the app's environment. A `docker-compose.yaml` file defines the services
+that make up your app in YAML. And `docker compose up` starts and runs your
+entire app [3].
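
For example, a cut-down compose file in this shape (service names, images, and
ports are illustrative, not the actual Doubtfire configuration) could be:

```yaml
# Illustrative docker-compose.yaml: a web service built from the local
# Dockerfile plus a database service it depends on.
services:
  web:
    build: .            # uses the Dockerfile in this directory
    ports:
      - "443:443"       # expose HTTPS to the host
    depends_on:
      - db
  db:
    image: mariadb:10   # pulled from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker compose up` against this file builds `web` and starts both
containers together.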
### Artifact Registry
-Initially, we reviewed the use of Google Container Registry (GCR) service in Google Cloud as a
-repository for container images. However upon reading Google Cloud's documentation, it highlighted
-that Artifact Registry is the recommended service for managing container images [4] artifacts for
-Google Cloud projects. Private images can be pushed to a GCR repository and pulled for use within
-GCP.
+Initially, we reviewed the use of the Google Container Registry (GCR) service
+in Google Cloud as a repository for container images. However, upon reading
+Google Cloud's documentation, it highlighted that Artifact Registry is the
+recommended service for managing container image artifacts [4] for Google Cloud
+projects. Private images can be pushed to a repository and pulled for use
+within GCP.
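
A sketch of the push workflow, assuming Artifact Registry: the repository name,
region, and image tag below are placeholders, and `PROJECT_ID` must be replaced
with the actual project ID:

```sh
# Illustrative: create an Artifact Registry Docker repository, then tag
# and push a local image to it. All names/regions are placeholders.
gcloud services enable artifactregistry.googleapis.com
gcloud artifacts repositories create ontrack-images \
    --repository-format=docker --location=australia-southeast2
# Let the local Docker client authenticate to the registry
gcloud auth configure-docker australia-southeast2-docker.pkg.dev
docker tag ontrack-web:latest \
    australia-southeast2-docker.pkg.dev/PROJECT_ID/ontrack-images/ontrack-web:latest
docker push \
    australia-southeast2-docker.pkg.dev/PROJECT_ID/ontrack-images/ontrack-web:latest
```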
### OnTrack/Doubtfire deployment and components
-The **Doubtfire** (commonly known as **OnTrack**) deployment guide we referred to outlines that
-Doubtfire is deployed using Docker containers described in a docker compose [5]. The application
-involves the following components:
-
-> - a proxy, based on nginx, that handles HTTPS and routes traffic to the webserver or apiserver
-> containers.
-> - a webserver, based on nginx, that serves the static html/css/javascript/etc files.
-> - an apiserver, based on rails, that serves the restful API used by the application.
-> - an application server (pdfgen), based on rails, that uses cron jobs to periodically generate
-> PDFs from student submissions, and send status emails.
-> - a database server, based on Maria DB or MySql used by the api and application servers to persist
-> data
-> - file storage connected to the apiserver and application server for storing student work
+The **Doubtfire** (commonly known as **OnTrack**) deployment guide we referred
+to outlines that Doubtfire is deployed using Docker containers described in a
+docker compose file [5]. The application involves the following components:
+
+> - a proxy, based on nginx, that handles HTTPS and routes traffic to the
+> webserver or apiserver containers.
+> - a webserver, based on nginx, that serves the static html/css/javascript/etc
+> files.
+> - an apiserver, based on rails, that serves the restful API used by the
+> application.
+> - an application server (pdfgen), based on rails, that uses cron jobs to
+> periodically generate PDFs from student submissions, and send status emails.
+> - a database server, based on Maria DB or MySql used by the api and
+> application servers to persist data
+> - file storage connected to the apiserver and application server for storing
+> student work
> - an external mail server to send emails
> - an external authentication server to authenticate users. (Source:
> [doubtfire-deploy GitHub repo](https://github.com/thoth-tech/doubtfire-deploy/blob/main/DEPLOYING.md))
-There are quite a few steps that will need to be performed to configure the above components. The
-team will require a high-level understanding of the components, services, and frameworks used for
-the setup and changes required.
+There are quite a few steps that will need to be performed to configure the
+above components. The team will require a high-level understanding of the
+components, services, and frameworks used for the setup and changes required.
### Learnings
-- After a couple of discussions with the Pipelines team and Andrew Cain, it was determined that
- Cloud Build and Cloud Run had some limitations and may not be ideal to be able to cater for the
- wider team at Thoth-Tech. As a result, the Google Cloud and Pipelines teams have moved on to
- consider other options that would be more viable and aligned to the goals of the project.
-- In order to use Artifact Registry, the service will need to be enabled in your Google Cloud
- Platform (GCP) project.
-- In order to push or pull a container image, Docker will need to be installed and configured.
-- The team will need a high-level understanding of components such as nginx, rails, pdfgen, database
- servers (MariaDB or MySql), Action Mailer, Dockerfile, and `docker-compose.yaml`.
+- After a couple of discussions with the Pipelines team and Andrew Cain, it was
+ determined that Cloud Build and Cloud Run had some limitations and may not be
+ able to cater for the wider team at Thoth Tech. As a result, the Google Cloud
+ and Pipelines teams have moved on to consider other options that would be
+ more viable and aligned to the goals of the project.
+- In order to use Artifact Registry, the service will need to be enabled in your
+ Google Cloud Platform (GCP) project.
+- In order to push or pull a container image, Docker will need to be installed
+ and configured.
+- The team will need a high-level understanding of components such as nginx,
+ rails, pdfgen, database servers (MariaDB or MySql), Action Mailer, Dockerfile,
+ and `docker-compose.yaml`.
## Outcomes
-Following the team's research of Google Cloud and its services, deployments, and Docker
-containers,the team have determined the following outcomes:
+Following the team's research into Google Cloud and its services, deployments,
+and Docker containers, the team has determined the following outcomes:
-1. Provide a high-level design and document architecture overview of how Google Cloud Platform (GCP)
- will be used to support overall deployment and CI/CD pipelines to run resources for this project;
+1. Provide a high-level design and document architecture overview of how Google
+ Cloud Platform (GCP) will be used to support overall deployment and CI/CD
+ pipelines to run resources for this project;
2. Organise the team's access to GCP and GCP project;
-3. Create user stories / Trello cards for configuration of components required for the Doubtfire
- deployment;
-4. Plan to build the platform requirements to deploy an instance of Doubtfire within GCP;
+3. Create user stories / Trello cards for configuration of components required
+ for the Doubtfire deployment;
+4. Plan to build the platform requirements to deploy an instance of Doubtfire
+ within GCP;
## References
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/deployment-epic.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/deployment-epic.md
index cc593f103..ae5c6bef4 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/deployment-epic.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/deployment-epic.md
@@ -4,15 +4,16 @@ title: Deployment Epic
## Background
-A live implementation of the OnTrack platform is accessible through Deakin University and other
-organisations around the world. Currently, there is no separately hosted platform for Thoth Tech's
-code implementation to operate on.
+A live implementation of the OnTrack platform is accessible through Deakin
+University and other organisations around the world. Currently, there is no
+separately hosted platform for Thoth Tech's code implementation to operate on.
## Business Value
-Now that Thoth Tech has been established and development work on OnTrack will start to increase, we
-need a Thoth Tech hosted deployment of OnTrack. This will provide greater freedom to develop OnTrack
-to the evolving vision of the company.
+Now that Thoth Tech has been established and development work on OnTrack will
+start to increase, we need a Thoth Tech hosted deployment of OnTrack. This will
+provide greater freedom to develop OnTrack to the evolving vision of the
+company.
## In scope
@@ -29,10 +30,11 @@ New features
## What needs to happen
-The deployment team will need to create an employee-run/hosted version of OnTrack to Google Cloud,
-which will be separate to the Deakin version. This will be completed by three teams, which will all
-be working together to migrate to google cloud, build a CI/CD pipeline, and create the
-authentication for the platform for students.
+The deployment team will need to create an employee-run/hosted version of
+OnTrack on Google Cloud, which will be separate from the Deakin version. This
+will be completed by three teams, which will all work together to migrate to
+Google Cloud, build a CI/CD pipeline, and create the authentication for the
+platform for students.
- Thoth Tech Ontrack deployment to Google Cloud
- CI/CD Pipeline built
@@ -57,21 +59,23 @@ N/A
## Operations/Support
-Team members may need training/upskilling in technologies such as Google Cloud, Ruby on Rails,
-Docker, etc. Members will also need testing skills to make sure all the new functionality works and
-to be able to fix any bugs/problems.
+Team members may need training/upskilling in technologies such as Google Cloud,
+Ruby on Rails, Docker, etc. Members will also need testing skills to make sure
+all the new functionality works and to be able to fix any bugs/problems.
## What are the challenges?
-Team members have no existing code to work off, as this is new project that is being implemented
-however they may be able to use the OnTrack deployment architecture as a guide. Team members may
-have also not had the opportunity to work with the technologies they will be using for the
-deployment of OnTrack to Google Cloud.
+Team members have no existing code to work from, as this is a new project;
+however, they may be able to use the OnTrack deployment architecture as a
+guide. Team members may also not have had the opportunity to work with the
+technologies they will be using for the deployment of OnTrack to Google Cloud.
## Acceptance criteria
- Validate architecture and planning documentation with leadership
- CI/CD pipeline has testing, linting and security built into it
- Documentation is accurate to current version of products
-- Thoth Tech deployment is successfully hosted on Google Cloud and functions as expected
+- Thoth Tech deployment is successfully hosted on Google Cloud and functions as
+ expected
- Allow for admins to change passwords (but cannot access passwords)
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/overview.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/overview.md
index 1190c8d6f..c741a5444 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/overview.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/overview.md
@@ -18,41 +18,44 @@ title: Google Cloud - Overview
## Overview
-The Google Cloud team's main deliverable for the project is to deploy a student- run version of
-OnTrack which is hosted on Google Cloud Platform (GCP). The goal is for Thoth Tech to have their own
-deployment to develop OnTrack that is separate to Deakin's deployment.
+The Google Cloud team's main deliverable for the project is to deploy a
+student-run version of OnTrack which is hosted on Google Cloud Platform (GCP).
+The goal is for Thoth Tech to have their own deployment to develop OnTrack that
+is separate from Deakin's deployment.

-The Google cloud team have deployed a functional instance of Doubtfire (also known as OnTrack)
-hosted on GCP.
+The Google Cloud team has deployed a functional instance of Doubtfire (also
+known as OnTrack) hosted on GCP.
-Our GCP project is centrally managed by Deakin IT where the team have been granted access using our
-Deakin Google Workspace accounts.
+Our GCP project is centrally managed by Deakin IT, where the team has been
+granted access using our Deakin Google Workspace accounts.
- GCP Project Name: sit-22t1-ontrack-deplo-d026375
- GCP Project ID: sit-22t1-ontrack-deplo-d026375
-In Google Compute Engine, we have created server instance running Linux (Centos 7.x). The instance
-is a small, initial footprint that has a publicly facing network and accessible over the internet
-via HTTPS (port 443).
+In Google Compute Engine, we have created a server instance running Linux
+(CentOS 7.x). The instance is a small, initial footprint that has a publicly
+facing network and is accessible over the internet via HTTPS (port 443).
-We have used the source code from the Thoth Tech repository for the deployment into GCP, where we
-used docker compose to deploy the images for the components required to run Doubtfire (api server,
-app server, doubtfire-web, mariadb, nginx).
+We have used the source code from the Thoth Tech repository for the deployment
+into GCP, where we used docker compose to deploy the images for the components
+required to run Doubtfire (api server, app server, doubtfire-web, mariadb,
+nginx).
## Initial stages
-Initially, the Google Cloud team had spent time understanding GCP, Docker, and the Doubtfire
-deployment.
+Initially, the Google Cloud team had spent time understanding GCP, Docker, and
+the Doubtfire deployment.
### Tests via localhost
-Prior to deploying to GCP, we ran several tests locally (localhost) on our own workstations to
-determine the configuration changes required deploy Doubtfire successfully. On our individual
-workstations, we cloned the [Doubtfire-deploy-GCP repository]
- and modified the necessary files. We then used
-docker compose and Docker to run and deploy containers.
+Prior to deploying to GCP, we ran several tests locally (localhost) on our own
+workstations to determine the configuration changes required to deploy
+Doubtfire successfully. On our individual workstations, we cloned the
+[Doubtfire-deploy-GCP repository]
+ and modified the necessary
+files. We then used docker compose and Docker to run and deploy containers.

@@ -60,18 +63,19 @@ Success! We have Docker containers running locally.

-Success again! We have OnTrack being hosted locally and is accessible via .
+Success again! We have OnTrack hosted locally, and it is accessible via
+ .

### Google Compute Engine instance
-Once we determined the configuration changes required to be able to run locally (localhost), we then
-needed to determine how to create and deploy a Compute Engine server instance in GCP that we could
-use to deploy Doubtfire.
+Once we determined the configuration changes required to be able to run locally
+(localhost), we then needed to determine how to create and deploy a Compute
+Engine server instance in GCP that we could use to deploy Doubtfire.
-We started with a small, initial footprint and deployed a basic virtual machine (VM) instance with
-following details;
+We started with a small, initial footprint and deployed a basic virtual machine
+(VM) instance with the following details:
- Name: instance-1
- Zone: australia-southeast2-a
@@ -83,9 +87,10 @@ following details;
- Firewalls: HTTP, HTTPS enabled
- GPUs/Display device: None, disabled
-Once we had the instance up and running, we connected to the instance using command-line shell via
-SSH. In the Google Cloud console, you can view the options to connect by clicking the drop-down menu
-beside _Connect SSH_ on the instance view.
+Once we had the instance up and running, we connected to the instance using a
+command-line shell via SSH. In the Google Cloud console, you can view the
+options to connect by clicking the drop-down menu beside _Connect SSH_ on the
+instance view.
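
The same connection can also be made from a local terminal with the Cloud SDK,
using the instance name and zone listed above:

```sh
# SSH to the Compute Engine instance via the gcloud CLI
gcloud compute ssh instance-1 --zone=australia-southeast2-a
```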

@@ -105,32 +110,37 @@ There were a few packages that needed to be installed on the host, such as -
- openssl
- nano (optional)
-Once installed using `yum`, we had the minimum requirements to get started on the Doubtfire
-deployment.
+Once installed using `yum`, we had the minimum requirements to get started on
+the Doubtfire deployment.
### Deploying OnTrack
-From here, we pulled down the doubtfire-deploy repository, generated a new certificate and private
-key for the host, and ran docker compose to deploy the containers for the OnTrack deployment.
+From here, we pulled down the doubtfire-deploy repository, generated a new
+certificate and private key for the host, and ran docker compose to deploy the
+containers for the OnTrack deployment.
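
Sketched as commands, the sequence looks roughly like the following; the
certificate file names and subject are illustrative, and the actual steps
follow the doubtfire-deploy guide:

```sh
# Clone the deployment repository
git clone https://github.com/thoth-tech/doubtfire-deploy.git
cd doubtfire-deploy
# Generate a self-signed certificate and private key for the host
# (file names and CN are placeholders)
openssl req -x509 -newkey rsa:4096 -nodes \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=ontrack.example.com"
# Build and start the containers in the background
docker compose up -d
```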

-And we can browse to OnTrack over the internet using our public IP address via HTTPS (port 443).
+And we can browse to OnTrack over the internet using our public IP address via
+HTTPS (port 443).

## Next stages
-Since we have a functional and publicly accessible instance of Doubtfire running in GCP, the next
-stages would be to focus on the Deployment team project objectives, such as -
+Since we have a functional and publicly accessible instance of Doubtfire running
+in GCP, the next stages would be to focus on the Deployment team project
+objectives, such as -
-- Create a CI/CD pipeline that automates the building, deployment, and validation of the Thoth Tech
- OnTrack deployment onto GCP.
+- Create a CI/CD pipeline that automates the building, deployment, and
+ validation of the Thoth Tech OnTrack deployment onto GCP.
- LDAP authentication for OnTrack.
- Email notifications configured with an SMTP server.
-- Review security posture and instance sizing of the Thoth Tech OnTrack deployment in GCP.
+- Review security posture and instance sizing of the Thoth Tech OnTrack
+ deployment in GCP.
-Here's a high-level diagram of using CI/CD pipeline to automate the deployment of OnTrack onto GCP -
+Here's a high-level diagram of using a CI/CD pipeline to automate the
+deployment of OnTrack onto GCP:

diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/software-requirements-specifications-document.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/software-requirements-specifications-document.md
index b50b74ddf..9e23e1616 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/software-requirements-specifications-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/software-requirements-specifications-document.md
@@ -6,59 +6,66 @@ title: Software Requirements Specifications Document
1.1 Product purpose
-Currently, there are three different product purposes within the deployment deliverable. The first
-of which is the google cloud team, and their focus is creating a student-run / student-hosted
-deployment of Ontrack separate from the Deakin Version. The second team is working on developing a
-pipeline between the two versions of the OnTrack system, that will streamline development and
-functionality. Lastly, the Enhance authentication team is working on creating new authentication
-systems for students to access their OnTrack account.
+Currently, there are three different product purposes within the deployment
+deliverable. The first is the Google Cloud team, whose focus is creating a
+student-run / student-hosted deployment of OnTrack separate from the Deakin
+version. The second team is working on developing a pipeline between the two
+versions of the OnTrack system that will streamline development and
+functionality. Lastly, the Enhance Authentication team is working on creating
+new authentication systems for students to access their OnTrack accounts.
1.2 & 1.3 Intended audience / use
-The intended audience is both system admins and users of OnTrack. Both the google cloud and pipeline
-team will focus on a product that intended for the use of product owners only, but the enhance
-authentication team will focus on a product that both system admins and users can use, as system
-admins will be able to reset students' passwords and Provide password management for LDAP and
-database implementations. The product users will also have enhanced functionality with
-authentication, as they will have new features which will allow them to create a new or obtain their
-old password.
+The intended audience is both system admins and users of OnTrack. The Google
+Cloud and pipeline teams will focus on a product intended for product owners
+only, but the Enhance Authentication team will focus on a product that both
+system admins and users can use, as system admins will be able to reset
+students' passwords and provide password management for LDAP and database
+implementations. Product users will also have enhanced authentication
+functionality, with new features that allow them to create a new password or
+recover their old one.
1.4 Scope
-The scope of the project is to create an upgraded deployment of the OnTrack system, in which, the
-system will be student-run/ hosted via google cloud, A pipeline build that will focus on version
-control, acceptance testing, independent deployment and production deployment. Lastly, the inclusion
-of a refreshed authentication system that will assist both product users and system admins of the
-OnTrack system.
+The scope of the project is to create an upgraded deployment of the OnTrack
+system in which the system is student-run / student-hosted via Google Cloud; a
+pipeline build that focuses on version control, acceptance testing,
+independent deployment, and production deployment; and, lastly, a refreshed
+authentication system that assists both product users and system admins of the
+OnTrack system.
## 2.Description of overall System
2.1 User requirements
-The requirements below are what is needed for both system admins and product users.
+The requirements below are what is needed for both system admins and product
+users.
users
-- Ability to access and check my passwords, including previously used ones, and change
- currentlyusing
-- Assurance that the authentication solution is secure, so that my passwords and other information
- is not publicly disclosed.
-- An authentication solution to be reliable and respond swiftly, so that I can access my account as
- needed and on-demand.
-- Ability to reset my Ontrack password myself, so I don't need to contact a system administrator
-- up-to-date version of OnTrack hosted on GCP so that I won't have to wait for the service to be
- manually updated.
+- Ability to access and check my passwords, including previously used ones,
+  and change the one currently in use
+- Assurance that the authentication solution is secure, so that my passwords
+  and other information are not publicly disclosed.
+- An authentication solution to be reliable and respond swiftly, so that I can
+ access my account as needed and on-demand.
+- Ability to reset my OnTrack password myself, so I don't need to contact a
+  system administrator
+- Up-to-date version of OnTrack hosted on GCP so that I won't have to wait
+  for the service to be manually updated.
System admin
- Have users' passwords stored in a secure way
-- have a database that is easy and low costing to maintenance and easy to be consistent with future
- add-ons
-- in-house authentication solution developed that meets all our authentication needs
-- Ability for students to reset their Ontrack password themself, so they don't need to contact a
- system administrator
+- Have a database that is easy and low-cost to maintain and easy to keep
+  consistent with future add-ons
+- In-house authentication solution developed that meets all our
+  authentication needs
+- Ability for students to reset their OnTrack passwords themselves, so they
+  don't need to contact a system administrator
- Access OnTrack via a link to see what different developers have done.
-- student hosted version of OnTrack, as it will make it easier to complete more tasks
+- Student-hosted version of OnTrack, as it will make it easier to complete
+  more tasks
- pipeline to be as simple and maintainable as possible
- generic deployment pipeline that can be changed in future
@@ -66,24 +73,27 @@ System admin
Assumptions and dependencies of the product user include:
-students will forget their password to OnTrack, students' passwords will be secure, students
-will need to change their passwords, students will need a copy of their current password, system
-will be able to deal with multiple password requests at once
+students will forget their passwords to OnTrack; students' passwords will be
+secure; students will need to change their passwords; students will need a
+copy of their current password; and the system will be able to handle
+multiple password requests at once.
Assumptions and dependencies of the system admin include:
-Students will have the skillset to maintain a student deployed version of OnTrack, future iterations
-will be made to the current system, system admins will need acceptance testing, students will have
-the skillset to develop future iterations, system will handle multiple iteration updates at once
+Students will have the skillset to maintain a student-deployed version of
+OnTrack; future iterations will be made to the current system; system admins
+will need acceptance testing; students will have the skillset to develop
+future iterations; and the system will handle multiple iteration updates at
+once.
## 3.System Requirements
Google cloud
-- Allow dev ops engineer to deploy the docker container on Google cloud Platform.
+- Allow DevOps engineers to deploy the Docker container on Google Cloud
+  Platform.
- Allow access to OnTrack users via URL from anywhere.
-- Allow OnTrack developers to package Doubtfire api and Doubtfire Web into the standalone
- applications.
+- Allow OnTrack developers to package Doubtfire API and Doubtfire Web into
+  standalone applications.
Pipeline build
diff --git a/src/content/docs/Products/OnTrack/Documentation/Deployment/user-stories.md b/src/content/docs/Products/OnTrack/Documentation/Deployment/user-stories.md
index a09b4e004..5036ff080 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Deployment/user-stories.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Deployment/user-stories.md
@@ -4,61 +4,68 @@ title: User Stories
## Provide password management for LDAP and database implementations
-As a user, I want to be able to easily access and check my password, and change it when I need, so I
-can avoid my information being leaked.
+As a user, I want to be able to easily access and check my password, and
+change it when I need to, so I can avoid my information being leaked.
-As a product owner, I want to have my user's passwords stored in a secure way, so that they can
-avoid data breaches and unnecessary legal consequences.
+As a product owner, I want to have my users' passwords stored in a secure
+way, so that we can avoid data breaches and unnecessary legal consequences.
-As a product owner, I want to have a database that is easy and low costing to maintenance, easy to
-be consistent with future add-ons, so that can save my efforts.
+As a product owner, I want to have a database that is easy and low-cost to
+maintain and easy to keep consistent with future add-ons, so that I can save
+effort.
## Add LDAP Server Option
-As a product owner, I want an in-house authentication solution developed that meets all our
-authentication needs, so that we do not have to rely on third-party authentication solutions within
-the OnTrack architecture.
+As a product owner, I want an in-house authentication solution developed that
+meets all our authentication needs, so that we do not have to rely on
+third-party authentication solutions within the OnTrack architecture.
-As a user, I want to be assured that the authentication solution is secure, so that my passwords and
-other information is not publicly disclosed.
+As a user, I want to be assured that the authentication solution is secure, so
+that my passwords and other information are not publicly disclosed.
-As a user, I want the authentication solution to be reliable and respond swiftly, so that I can
-access my account as needed and on-demand.
+As a user, I want the authentication solution to be reliable and respond
+swiftly, so that I can access my account as needed and on-demand.
## Extend API with user management
-As a user, I want to be able to reset my Ontrack password myself, so I do not need to contact a
-system administrator.
+As a user, I want to be able to reset my OnTrack password myself, so I do not
+need to contact a system administrator.
-As a system administrator, I want users to be able to reset their passwords without my input. I also
-want the ability to send a password reset request to users. This will save me time and increase
-security by allowing insecure/exposed passwords to be easily changed.
+As a system administrator, I want users to be able to reset their passwords
+without my input. I also want the ability to send a password reset request to
+users. This will save me time and increase security by allowing insecure/exposed
+passwords to be easily changed.
## Thoth Tech Ontrack deployment to Google Cloud
-As a product owner, I want to have more freedom in OnTrack development of the company's evolving
-vision separated from the Deakin version.
+As a product owner, I want more freedom to develop OnTrack towards the
+company's evolving vision, separate from the Deakin version.
-As a student/admin, I want to access the OnTrack via the URL from anywhere and should be able to
-perform all actions those are performed by students on Deakin version of track.
+As a student/admin, I want to access OnTrack via a URL from anywhere and be
+able to perform all the actions that students perform on the Deakin version
+of OnTrack.
-As a product owner, I want separation of roles as available in Deakin version of OnTrack.
+As a product owner, I want the same separation of roles as is available in the
+Deakin version of OnTrack.
-As a product owner, I want an email sent every time there is a successful build sent to production
+As a product owner, I want an email sent every time a successful build is
+deployed to production.
## Ontrack Pipeline CI/CD
-As a system admin, I want to have OnTrack built and deployed automatically with minimal user
-intervention in the process, so that the latest version is available. All the building processes
-should be automated so all I must do is initiate the building.
+As a system admin, I want to have OnTrack built and deployed automatically with
+minimal user intervention in the process, so that the latest version is
+available. All the build processes should be automated so that all I must do
+is initiate the build.
-As a user, I want to access an up-to-date version of OnTrack hosted on GCP (Google Cloud Platform)
-so that I will not have to wait for the service to be manually updated.
+As a user, I want to access an up-to-date version of OnTrack hosted on GCP
+(Google Cloud Platform) so that I will not have to wait for the service to be
+manually updated.
-As a developer, I want the pipeline to be as simple and maintainable as possible so that it can
-easily be updated to support future versions of OnTrack. Different versions should be obvious and
-labelled appropriately.
+As a developer, I want the pipeline to be as simple and maintainable as possible
+so that it can easily be updated to support future versions of OnTrack.
+Different versions should be obvious and labelled appropriately.
-As a developer, I want a generic deployment pipeline that can be simply changed out in the future,
-so that deployment targets for different services can be used. This should be done with
-environmental variables.
+As a developer, I want a generic deployment pipeline that can be easily swapped
+out in the future, so that deployment targets for different services can be
+used. This should be done with environment variables.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/Proposed_Google_Auth_feature.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/Proposed_Google_Auth_feature.md
index fa58fa587..56e30aeee 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/Proposed_Google_Auth_feature.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/Proposed_Google_Auth_feature.md
@@ -2,10 +2,11 @@
title: Google Authentication Implementation in Ruby on Rails Introduction
---
-This report details the attempted implementation of Google authentication using the
-google-api-client gem, challenges encountered, and a proposal for a new approach using the
-google-authenticator library available here the goal is to establish a secure, robust, and efficient
-authentication system within a Ruby on Rails application.
+This report details the attempted implementation of Google authentication using
+the google-api-client gem, the challenges encountered, and a proposal for a new
+approach using the google-authenticator library. The goal is to establish a
+secure, robust, and efficient authentication system within a Ruby on Rails
+application.
## What Was Tried and Why It Didn't Work
@@ -13,28 +14,30 @@ authentication system within a Ruby on Rails application.
1. **Setup Using google-api-client:**
- Integrated the gem to handle Google OAuth 2.0.
- - Created an endpoint in authentication_api.rb for Google authentication (/auth/google).
+ - Created an endpoint in authentication_api.rb for Google authentication
+ (/auth/google).
- Configured token verification using Google::Apis::Oauth2V2::Oauth2Service.
- Generated temporary tokens for authenticated users.
2. **Challenges Identified:**
- - **Token Validation Failures:** Issues with API key configuration and scope validation caused
- intermittent failures.
- - **Complexity in Library Usage:** The google-api-client gem required extensive configuration and
- debugging for basic functionality.
- - **Session Management:** Temporary tokens generated lacked proper integration with the
- application's session flow.
+ - **Token Validation Failures:** Issues with API key configuration and scope
+ validation caused intermittent failures.
+ - **Complexity in Library Usage:** The google-api-client gem required
+ extensive configuration and debugging for basic functionality.
+ - **Session Management:** Temporary tokens generated lacked proper
+ integration with the application's session flow.
## Proposed Approach: Using google-authenticator
-The [Google-authenticator](https://github.com/jaredonline/google-authenticator) library offers a
-simplified and efficient way to implement Google OAuth 2.0. It abstracts much of the complexity of
-token validation and user data retrieval.
+The [Google-authenticator](https://github.com/jaredonline/google-authenticator)
+library offers a simplified and efficient way to implement Google OAuth 2.0. It
+abstracts much of the complexity of token validation and user data retrieval.
### Key Benefits
- Simplified integration of Google authentication.
-- Minimal configuration with a focus on token validation and secure user onboarding.
+- Minimal configuration with a focus on token validation and secure user
+ onboarding.
- Lightweight and developer-friendly, reducing overhead.
### Implementation Plan Using google-authenticator
@@ -123,7 +126,8 @@ token validation and user data retrieval.
## Conclusion
-The switch to the google-authenticator library addresses the shortcomings of the previous approach
-while simplifying the integration process. This plan provides a clear path toward a reliable and
-secure Google authentication mechanism in the application. By leveraging this lightweight library,
-we can reduce complexity and improve user experience.
+The switch to the google-authenticator library addresses the shortcomings of the
+previous approach while simplifying the integration process. This plan provides
+a clear path toward a reliable and secure Google authentication mechanism in the
+application. By leveraging this lightweight library, we can reduce complexity
+and improve user experience.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/architecture-doc.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/architecture-doc.md
index 119249bdf..ec5be54fe 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/architecture-doc.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/architecture-doc.md
@@ -28,8 +28,8 @@ title: Architecture Document
### Documentation Information Summary
-This document is visual report for architecture for OnTrack consist of context and container diagram
-which explains its working.
+This document is a visual report of the OnTrack architecture, consisting of a
+context diagram and a container diagram that explain how the system works.
---
@@ -37,15 +37,16 @@ which explains its working.
### Purpose This document provides a high-level overview of the OnTrack system
-it intends tocommunicate the project structure and architecture to varying levels of complexity
-appropriate for various stakeholders withing the organisation and varying levels of technical
-literacy.
+it intends to communicate the project structure and architecture at varying
+levels of complexity appropriate for various stakeholders within the
+organisation and varying levels of technical literacy.
### Scope This Architecture document uses a context diagram
-and container diagram to provide ahigh-level overview of the system,both are highly visual and aim
-to be easy to comprehend, the context diagram aims to be non-technical, and the container diagram
-provides further information to understand system structures.
+and container diagram to provide a high-level overview of the system. Both are
+highly visual and aim to be easy to comprehend; the context diagram aims to be
+non-technical, and the container diagram provides further information to
+understand system structures.
---
@@ -61,19 +62,21 @@ provides further information to understand system structures.
## Architectural Goals and Constraints
-- Maintaining a base system that supports future work towards developing new or enhancing
- currentfeatures that improve the teaching and learning experience.
-- front-end components are clear to understand, user friendly, and straightforward to use.
-- System allows tutors to upload assessment tasks, resources, and assign learning outcomes to each
- task.
-- Students can set a learning outcome goal and filter tasks required for this goal, they can then
- view each task and download the related task resources.
-- Through the same task view, students can check deadlines, submit extension requests, send query's
- to tutors, and make task submissions.
-- Tutors can view and download submissions, manage extension requests, respond to queries and leave
- feedback.
-- System generates progress reports that are sent through email system, users can also track
- progress relating to each unit and their set learning outcome goals.
+- Maintaining a base system that supports future work towards developing new
+  or enhancing current features that improve the teaching and learning
+  experience.
+- Front-end components are clear to understand, user-friendly, and
+  straightforward to use.
+- System allows tutors to upload assessment tasks, resources, and assign
+ learning outcomes to each task.
+- Students can set a learning outcome goal and filter tasks required for this
+  goal; they can then view each task and download the related task resources.
+- Through the same task view, students can check deadlines, submit extension
+  requests, send queries to tutors, and make task submissions.
+- Tutors can view and download submissions, manage extension requests, respond
+ to queries and leave feedback.
+- System generates progress reports that are sent through the email system;
+  users can also track progress relating to each unit and their set learning
+  outcome goals.
## Use-Case View
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/ontrack-documentation-template-guide.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/ontrack-documentation-template-guide.md
index 396e2b7aa..38cc01f6f 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/ontrack-documentation-template-guide.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/ontrack-documentation-template-guide.md
@@ -9,10 +9,11 @@ title: OnTrack Documentation Template Guide
The purpose of this guide is to explain how to properly use the
[OnTrack Documentation Template](https://github.com/thoth-tech/documentation/blob/main/docs/OnTrack/Documentation/OnTrack%20Documentation%20Template.md).
-Documentation is all about informing others. By using the documentation template, you will be able
-to provide all the information needed for others to understand your work. This guide will show you
-why each line of data in the template is included. That way, you can better understand the template
-in general.
+Documentation is all about informing others. By using the documentation
+template, you will be able to provide all the information needed for others to
+understand your work. This guide will show you why each line of data in the
+template is included. That way, you can better understand the template in
+general.
Let's get started!
@@ -30,9 +31,10 @@ There are seven sections to the template:
- Contacts for further information
- Related Documents
-Each of these sections have been included because they give your reader important information about
-your documentation. Therefore, having this important information at the very start of your
-documentation will help the reader to better understand what you have to say.
+Each of these sections has been included because it gives your reader
+important information about your documentation. Therefore, having this
+important information at the very start of your documentation will help the
+reader to better understand what you have to say.
Let's look at each section more closely.
@@ -40,10 +42,11 @@ Let's look at each section more closely.
---
-The Author Information section is all about the author, or authors, of the documentation. When
-reading something that has been written, it's often very helpful to know who wrote it. That way, if
-the reader has more questions about it, or has some suggestions on the documentation itself, they
-know who to talk to.
+The Author Information section is all about the author, or authors, of the
+documentation. When reading something that has been written, it's often very
+helpful to know who wrote it. That way, if the reader has more questions about
+it, or has some suggestions on the documentation itself, they know who to talk
+to.
This section contains the following information:
@@ -51,27 +54,31 @@ This section contains the following information:
- Team
- Team (Delivery and/or Product) Lead
-An **Author**, or **Authors** when there is more than one, should be included to let the reader know
-who wrote the document. This also gives the reader someone to contact about the documentation if
-they need/want more information. Conversely, they can contact the author if the document ever needs
-to be edited, altered or updated.
+An **Author**, or **Authors** when there is more than one, should be included to
+let the reader know who wrote the document. This also gives the reader someone
+to contact about the documentation if they need/want more information.
+Conversely, they can contact the author if the document ever needs to be edited,
+altered or updated.
-The **Team** responsible for the document should also be included. This informs the reader about
-which team/project the document relates to. Hence, it would also give the reader more understanding
-about how the documentation relates to the company as a whole.
+The **Team** responsible for the document should also be included. This informs
+the reader about which team/project the document relates to. Hence, it would
+also give the reader more understanding about how the documentation relates to
+the company as a whole.
-The **Team (Delivery and/or Product) Lead** should be listed. This gives the reader an understanding
-of who is responsible for the team that the documentation relates to. Similar to the **authors**,
-this gives the reader a second point of contact. They could be contacted to get more information on
-the topic, or if the document needs to be edited, altered or updated.
+The **Team (Delivery and/or Product) Lead** should be listed. This gives the
+reader an understanding of who is responsible for the team that the
+documentation relates to. Similar to the **authors**, this gives the reader a
+second point of contact. They could be contacted to get more information on the
+topic, or if the document needs to be edited, altered or updated.
### Document Summary
---
-The Document Summary section contains information about the content of the document. In other words,
-it's all about what the documentation is trying to inform the reader about. In this section, the
-reader will be able to understand the general purpose of your documentation, and what it is about.
+The Document Summary section contains information about the content of the
+document. In other words, it's all about what the documentation is trying to
+inform the reader about. In this section, the reader will be able to understand
+the general purpose of your documentation, and what it is about.
This section contains the following information:
@@ -79,36 +86,40 @@ This section contains the following information:
- Documentation Type
- Documentation Information Summary
-A **Documentation Title** should be included within any documentation to let readers know what the
-document is about. Similar to the title of a book, this will let readers know what the information
-contained within relates to.
+A **Documentation Title** should be included within any documentation to let
+readers know what the document is about. Similar to the title of a book, this
+will let readers know what the information contained within relates to.
-The **Documentation Type** denotes to the reader what type of documentation your document is. An
-**Informative** document would explain a topic to your reader, similar to this guide. A
-**Technical** document however, would require or assume in-depth knowledge of the documentation's
-subject. This might be used for explaining specific pieces of code.
+The **Documentation Type** denotes to the reader what type of documentation
+your document is. An **Informative** document would explain a topic to your
+reader, similar to this guide. A **Technical** document, however, would require
+or assume in-depth knowledge of the documentation's subject. This might be used
+for explaining specific pieces of code.
-Lastly, the **Documentation Information Summary** section is included to give the reader a general
-understanding of the document itself. Here, you can write some text about what information is
-contained within the document. This gives the reader more of an idea of what your document is about
-before they start reading, helping them to better understand.
+Lastly, the **Documentation Information Summary** section is included to give
+the reader a general understanding of the document itself. Here, you can write
+some text about what information is contained within the document. This gives
+the reader more of an idea of what your document is about before they start
+reading, helping them to better understand.
-In the template, using this guide as an example, this section could look like this:
+In the template, using this guide as an example, this section could look like
+this:
- **Documentation Title**: OnTrack Documentation Template Guide
- **Documentation Type**: Informative
- **Documentation Information Summary**:
- - This document outlines how to use the OnTrack Documentation Template. It explains each data
- point and each section of the template, giving readers a better understanding of how to utilise
- the template.
+ - This document outlines how to use the OnTrack Documentation Template. It
+ explains each data point and each section of the template, giving readers a
+ better understanding of how to utilise the template.
### Document Review Information
---
-This section of the template is included to inform your reader about how up-to-date a document is.
-Any documentation, regardless of what it is about, should always be as up-to-date as possible. This
-ensures the information your document conveys to the reader is useful and understandable to them.
+This section of the template is included to inform your reader about how
+up-to-date a document is. Any documentation, regardless of what it is about,
+should always be as up-to-date as possible. This ensures the information your
+document conveys to the reader is useful and understandable to them.
This section contains the following information:
@@ -117,28 +128,32 @@ This section contains the following information:
- Date of Previous Documentation Review
- Date of Next Documentation Review
-All documentation for Thoth Tech is housed within an online website called GitHub. Thoth Tech has
-numerous file structures in GitHub to which documents are uploaded to. The **Date of Original
-Document Submission to GitHub** should therefore be supplied. This gives the reader an understanding
-of when the document was first uploaded to GitHub. This knowledge will inform the reader of how old
-the document is, regardless of how many versions or iterations there are.
-
-A **Documentation Version** (i.e. Version 1.0 or V1.0) should also be included where there have been
-multiple versions or iterations of a particular document. A Version Number would help readers in
-working out which version of the documentation they are reading. This could come in handy if the
-reader needed to review a long-ago written (legacy) version, or if two versions needed to be
+All documentation for Thoth Tech is housed on an online platform called
+GitHub. Thoth Tech has numerous file structures in GitHub to which documents
+are uploaded. The **Date of Original Document Submission to GitHub** should
+therefore be supplied. This gives the reader an understanding of when the
+document was first uploaded to GitHub. This knowledge will inform the reader of
+how old the document is, regardless of how many versions or iterations there
+are.
+
+A **Documentation Version** (e.g. Version 1.0 or V1.0) should also be included
+where there have been multiple versions or iterations of a particular document.
+A Version Number helps readers work out which version of the documentation
+they are reading. This could come in handy if the reader needed to review an
+older (legacy) version, or if two versions needed to be
compared.
-The **Date of Previous Documentation Review** should be included to show when the last review of the
-document was done. This would inform readers as to how long ago the documentation was last confirmed
-as up-to-date. Therefore, the reader gains a better understanding of whether the information is
-still current.
+The **Date of Previous Documentation Review** should be included to show when
+the last review of the document was done. This would inform readers as to how
+long ago the documentation was last confirmed as up-to-date. Therefore, the
+reader gains a better understanding of whether the information is still current.
-A **Date of Next Documentation Review** should be included to give readers the date when the
-document will/could/should next be reviewed. This could be recorded either by the author(s), or an
-authorised company member. Including this data ensures that the document can be reviewed
-periodically to ensure continuing accuracy. Depending on how often the data in the document may
-change or become obsolete, this could be altered.
+A **Date of Next Documentation Review** should be included to give readers the
+date when the document will next be reviewed. This could be recorded either by
+the author(s) or an authorised company member. Including this date ensures that
+the document is reviewed periodically for continuing accuracy. The review
+interval can be adjusted depending on how often the information in the document
+may change or become obsolete.
An example could look like this:
@@ -151,55 +166,60 @@ An example could look like this:
---
-A **Key Terms** section could be included within your document to give the definitions of terms or
-phrases important to your documentation. This list could provide your reader with the background
-knowledge they need to gain the best understanding of what you are trying to say. This might be
-particularly helpful in technical documents, where a number of terms might need to be defined.
+A **Key Terms** section could be included within your document to give the
+definitions of terms or phrases important to your documentation. This list could
+provide your reader with the background knowledge they need to gain the best
+understanding of what you are trying to say. This might be particularly helpful
+in technical documents, where a number of terms might need to be defined.
### Key Links/Resources
---
-Similarly, a list of **Key Links or Resources** could also help your reader in understanding your
-documentation better. In this section you could include hyperlinks to webpages or even other Thoth
-Tech documents in GitHub to give your reader necessary background information. Having these links at
-the start of your documentation also saves the reader looking for them throughout the entire
-document. With this background information and links, this list will assist your reader in
-understanding what you have to say.
+Similarly, a list of **Key Links or Resources** could also help your reader
+understand your documentation better. In this section you could include
+hyperlinks to webpages or even other Thoth Tech documents in GitHub to give your
+reader necessary background information. Having these links at the start of your
+documentation also saves the reader from searching for them throughout the
+entire document. With this background information and links, this list will
+assist your reader in understanding what you have to say.
### Contacts for further information
---
-Including a list of **Contacts for further information** could be beneficial to your reader. This
-list would include people who can be contacted if the reader would like more information about:
+Including a list of **Contacts for further information** could be beneficial to
+your reader. This list would include people who can be contacted if the reader
+would like more information about:
- your documentation
- the information within your documentation
- the subject of your documentation.
-Given Thoth Tech's teams come and go each trimester, it might be a good idea to list more constant
-contacts, like Deakin staff. Listing staff, or other company members not within the Capstone
-unit(s), would ensure new teams have a point of contact for more information each trimester. The
-list of contacts would ensure the new teams have an idea of who would best help them understand your
-documentation. Also, it shows the most suitable people they can ask questions to regarding the
-document.
+Given that Thoth Tech's teams come and go each trimester, it might be a good
+idea to list more permanent contacts, like Deakin staff. Listing staff, or other
+company members not within the Capstone unit(s), would ensure new teams have a
+point of contact for more information each trimester. The list of contacts
+would give new teams an idea of who could best help them understand your
+documentation, and shows the most suitable people to ask questions about the
+document.
### Related Documents
---
-Listing **Related Documents** to your documentation could also help the reader better understand
-what you are trying to say. This section could list other documents that might aid the reader in
-understanding your work. This could include documents that:
+Listing **Related Documents** to your documentation could also help the reader
+better understand what you are trying to say. This section could list other
+documents that might aid the reader in understanding your work. This could
+include documents that:
- are related to the subject of your documentation
- would provide the reader with necessary background information
- could otherwise help your reader in understanding your document.
-For example, the beginning of this guide contains a link to the Documentation template, as it
-relates to this document. Supplying a link to the template itself will aid readers with
-understanding this guide.
+For example, the beginning of this guide contains a link to the Documentation
+template, as it relates to this document. Supplying a link to the template
+itself will aid readers with understanding this guide.
## Helpful Information
@@ -207,11 +227,12 @@ understanding this guide.
When writing your documentation, the following links may be useful:
-- Thoth Tech has some general rules on how we would like documentation to be written. These can be
- found
+- Thoth Tech has some general rules on how we would like documentation to be
+ written. These can be found
[here](https://github.com/thoth-tech/handbook/blob/main/docs/processes/documentation/writing-style-guide.md)
-- Thoth Tech also uses a markup language called Markdown to write documentation in. Thoth Tech has a
- document outlining what Markdown is, and how to write it, which can be found
+- Thoth Tech also uses a markup language called Markdown to write documentation
+ in. Thoth Tech has a document outlining what Markdown is, and how to write it,
+ which can be found
[here](https://github.com/thoth-tech/handbook/blob/main/docs/learning/training/markdown-guide.md).
- Thoth Tech's Documentation Template can be found
[here](https://github.com/thoth-tech/documentation/blob/main/docs/OnTrack/Documentation/OnTrack%20Documentation%20Template.md).
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/privacy-policies.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/privacy-policies.md
index 22270301b..f6dfef3fc 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/privacy-policies.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/privacy-policies.md
@@ -4,109 +4,126 @@ title: Privacy Policies
### **(A) This Policy**
-This Policy is issued by each of the Controller entities listed in Section P below (" **we**", "
-**us**" or " **our**"). This Policy is addressed to individuals outside our organisation with whom
-we interact, including students, staff (together, " **you** )
+This Policy is issued by each of the Controller entities listed in Section P
+below ("**we**", "**us**" or "**our**"). This Policy is addressed to
+individuals outside our organisation with whom we interact, including students
+and staff (together, "**you**").
-This Policy may be amended or updated from time to time to reflect changes in our practices with
-respect to the Processing of Personal Data, or changes in applicable law. We encourage you to read
-this Policy carefully, and to regularly check this page to review any changes we might make in
-accordance with the terms of this Policy.
+This Policy may be amended or updated from time to time to reflect changes in
+our practices with respect to the Processing of Personal Data, or changes in
+applicable law. We encourage you to read this Policy carefully, and to regularly
+check this page to review any changes we might make in accordance with the terms
+of this Policy.
### **(B) Processing your Personal Data**
-Collection of Personal Data: We collect or obtain Personal Data about you from the following
-sources:
+Collection of Personal Data: We collect or obtain Personal Data about you from
+the following sources:
Data provided to us: University login credentials
Relationship data: In the ordinary course of our relationship with you.
-Website data: required personal information is collected while signup/sign in procedures.
+Website data: required personal information is collected during the sign-up and
+sign-in procedures.
-Creation of Personal Data: \*\* \*\* In providing our Services, we may also create Personal Data
-about you, such as records of your interactions with host organisation and details of your
-interaction on our website and we may record your track on tasks given to you.
+Creation of Personal Data: In providing our Services, we may also create
+Personal Data about you, such as records of your interactions with the host
+organisation, details of your interaction on our website, and records of your
+progress on tasks assigned to you.
-Categories of Personal Data: The categories of Personal Data about you that we Process include:
+Categories of Personal Data: The categories of Personal Data about you that we
+Process include:
-Personal details: names; gender; date of birth / Domestic or International, course details etc.
+Personal details: names; gender; date of birth; domestic or international
+status; course details; etc.
Contact details: Deakin email address.
-Course Assessment: Your Assessment task details, your progress on tasks and results of tasks.
+Course Assessment: Your Assessment task details, your progress on tasks and
+results of tasks.
#### **Sensitive Personal Data**
-We do collect your personal details, course details to provide you with the assessment tasks. In
-Processing your Sensitive Personal Data in connection with the purposes set out in this Policy, we
-may rely on one or more of the following legal bases, depending on the circumstances:
+We collect your personal details and course details to provide you with
+assessment tasks. In Processing your Sensitive Personal Data in connection with
+the purposes set out in this Policy, we may rely on one or more of the following
+legal bases, depending on the circumstances:
-we have obtained your prior express consent to the Processing (this legal basis is only used in
-relation to Process that is entirely voluntary – it is not used for Processing that is necessary or
-obligatory in any way);
+we have obtained your prior express consent to the Processing (this legal basis
+is only used in relation to Processing that is entirely voluntary – it is not
+used for Processing that is necessary or obligatory in any way);
the Processing is required or permitted by applicable law
the Processing is necessary to protect the vital interests of any individual; or
-we have a legitimate interest in carrying out the Processing for the purpose of managing, operating.
+we have a legitimate interest in carrying out the Processing for the purpose of
+managing and operating our Services.
### **(C) Data Security**
-We have implemented appropriate technical and organisational security measures designed to protect
-your Personal Data against accidental or unlawful destruction, loss, alteration, unauthorised
-disclosure, unauthorised access, and other unlawful, or unauthorised forms of Processing, in
-accordance with applicable law.
+We have implemented appropriate technical and organisational security measures
+designed to protect your Personal Data against accidental or unlawful
+destruction, loss, alteration, unauthorised disclosure, unauthorised access, and
+other unlawful, or unauthorised forms of Processing, in accordance with
+applicable law.
-Because the internet is an open system, the transmission of information via the internet is not
-completely secure. Although we implement all reasonable measures to protect your Personal Data, we
-cannot guarantee the security of your data transmitted to us using the internet – any such
-transmission is at your own risk, and you are responsible for ensuring that any Personal Data that
-you send to us are sent securely.
+Because the internet is an open system, the transmission of information via the
+internet is not completely secure. Although we implement all reasonable measures
+to protect your Personal Data, we cannot guarantee the security of your data
+transmitted to us using the internet – any such transmission is at your own
+risk, and you are responsible for ensuring that any Personal Data that you send
+to us are sent securely.
### **(D) Data Accuracy**
We take every reasonable step to ensure that:
-your Personal Data that we Process are accurate and, where necessary, kept up to date; and
+your Personal Data that we Process are accurate and, where necessary, kept up to
+date; and
-Any of your Personal Data that we Process that are inaccurate (having regard to the purposes for
-which they are Processed) are erased or rectified without delay.
+any of your Personal Data that we Process that are inaccurate (having regard to
+the purposes for which they are Processed) are erased or rectified without
+delay.
### **(E) Data Minimisation**
-We take every reasonable step to ensure that your Personal Data that we Process are limited to the
-Personal Data reasonably required in connection with the purposes set out in this Policy (including
-the provision of Services to you).
+We take every reasonable step to ensure that your Personal Data that we Process
+are limited to the Personal Data reasonably required in connection with the
+purposes set out in this Policy (including the provision of Services to you).
### **(F) Data Retention**
-We take every reasonable step to ensure that your Personal Data is only Processed for the minimum
-period necessary for the purposes set out in this Policy. We retain copies of your Personal Data in
-a form that permits identification only for as long as:
+We take every reasonable step to ensure that your Personal Data is only
+Processed for the minimum period necessary for the purposes set out in this
+Policy. We retain copies of your Personal Data in a form that permits
+identification only for as long as:
-We maintain an ongoing relationship with (for example, when you are using the site for your tasks)
-your Personal Data are necessary in connection with the lawful purposes set out in this Policy
+We maintain an ongoing relationship with you (for example, when you are using
+the site for your tasks) and your Personal Data are necessary in connection
+with the lawful purposes set out in this Policy
-We receive your consent to store the data for a longer period of time (for example, in the case of
-application documents for your assessment records.).
+We receive your consent to store the data for a longer period of time (for
+example, in the case of application documents for your assessment records).
### **(G) Your Legal rights**
-Subject to applicable law, you may have several rights regarding the Processing of your Personal
+Subject to applicable law, you may have several rights regarding the Processing
+of your Personal
Data, including:
-The right not to provide your Personal Data to us (however, we are unable to provide you with your
-course assessment tasks until you provide us with your personal and course details)
+The right not to provide your Personal Data to us (however, we are unable to
+provide you with your course assessment tasks until you provide us with your
+personal and course details)
-The right to request access to, or copies of, your Personal Data that we Process, or control,
-together with information regarding the source, purpose and nature of processing and disclosure of
-those Personal Data;
+The right to request access to, or copies of, your Personal Data that we
+Process, or control, together with information regarding the source, purpose and
+nature of processing and disclosure of those Personal Data;
-the right to request rectification of any inaccuracies in your Personal Data that we Process or
-control;
+the right to request rectification of any inaccuracies in your Personal Data
+that we Process or control;
the right to request, on legitimate grounds:
@@ -114,10 +131,11 @@ erasure of your Personal Data that we Process or control; or
restriction of Processing of your Personal Data that we Process or control;
-the right to object, on legitimate grounds, to the Processing of your Personal Data by us, or on our
-behalf;
+the right to object, on legitimate grounds, to the Processing of your Personal
+Data by us, or on our behalf;
-The right to withdraw your consent to Processing, where the lawfulness of processing is based on
-consent (noting that such withdrawal does not affect the lawfulness of any Processing performed
-prior to the date on which we receive notice of such withdrawal, and does not prevent the Processing
-of your Personal Data in reliance upon any other available legal bases);
+The right to withdraw your consent to Processing, where the lawfulness of
+processing is based on consent (noting that such withdrawal does not affect the
+lawfulness of any Processing performed prior to the date on which we receive
+notice of such withdrawal, and does not prevent the Processing of your Personal
+Data in reliance upon any other available legal bases);
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/report-on-data-analytics-tools.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/report-on-data-analytics-tools.md
index a48849245..44a3f1a05 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/report-on-data-analytics-tools.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/report-on-data-analytics-tools.md
@@ -4,93 +4,109 @@ title: Research Spike - Report on Data Analytics Tools
## Introduction
-In today's data-driven educational landscape, schools and universities rely on advanced analytics
-tools to gain insights, make informed decisions, and optimize operations. This report offers an
-in-depth comparison of three prominent analytics tools: OctopusBI, Tableau, and Track One. The focus
-is on their features, potential integration with the OnTrack platform, and alignment with the
-specific needs of educational institutions.
+In today's data-driven educational landscape, schools and universities rely on
+advanced analytics tools to gain insights, make informed decisions, and optimize
+operations. This report offers an in-depth comparison of three prominent
+analytics tools: OctopusBI, Tableau, and Track One. The focus is on their
+features, potential integration with the OnTrack platform, and alignment with
+the specific needs of educational institutions.
## OctopusBI
-OctopusBI is a robust business intelligence platform with a range of features designed to facilitate
-data exploration, visualization, and reporting. It offers the following key features:
-
-- Customizable Dashboards and Reports: OctopusBI provides a user-friendly interface for creating and
- customizing dashboards and reports, allowing educational institutions to tailor their data
- representations to specific requirements.
-- Data Integration: OctopusBI supports integration with various data sources, enabling seamless
- connectivity to student information systems, learning management systems, and other relevant
- databases.
-- Interactive Data Visualization: The platform offers interactive charting tools and visualizations,
- enabling stakeholders to explore data trends and insights more effectively.
-- Collaboration and Sharing: OctopusBI facilitates collaboration through commenting and sharing
- functionalities, enhancing communication among faculty and staff members.
+OctopusBI is a robust business intelligence platform with a range of features
+designed to facilitate data exploration, visualization, and reporting. It offers
+the following key features:
+
+- Customizable Dashboards and Reports: OctopusBI provides a user-friendly
+ interface for creating and customizing dashboards and reports, allowing
+ educational institutions to tailor their data representations to specific
+ requirements.
+- Data Integration: OctopusBI supports integration with various data sources,
+ enabling seamless connectivity to student information systems, learning
+ management systems, and other relevant databases.
+- Interactive Data Visualization: The platform offers interactive charting tools
+ and visualizations, enabling stakeholders to explore data trends and insights
+ more effectively.
+- Collaboration and Sharing: OctopusBI facilitates collaboration through
+ commenting and sharing functionalities, enhancing communication among faculty
+ and staff members.
## Tableau
-Tableau is a renowned data visualization and analytics tool known for its user-friendly interface
-and comprehensive capabilities. Key features include:
-
-- Advanced Data Visualization: Tableau's drag-and-drop interface empowers users to create
- sophisticated visualizations, aiding educational institutions in presenting complex data in an
- easily understandable manner.
-- Wide Range of Connectors: Tableau offers an extensive library of connectors, enabling seamless
- data integration from various sources, including cloud platforms, databases, and spreadsheets.
-- Predictive Analytics and Forecasting: Educational institutions can leverage Tableau's advanced
- analytics and forecasting tools to identify trends, patterns, and future scenarios, enhancing
- proactive decision-making.
-- Scalability and Performance: Tableau's scalable architecture ensures that as data volumes grow,
- the system remains efficient and capable of handling increasing demands.
+Tableau is a renowned data visualization and analytics tool known for its
+user-friendly interface and comprehensive capabilities. Key features include:
+
+- Advanced Data Visualization: Tableau's drag-and-drop interface empowers users
+ to create sophisticated visualizations, aiding educational institutions in
+ presenting complex data in an easily understandable manner.
+- Wide Range of Connectors: Tableau offers an extensive library of connectors,
+ enabling seamless data integration from various sources, including cloud
+ platforms, databases, and spreadsheets.
+- Predictive Analytics and Forecasting: Educational institutions can leverage
+ Tableau's advanced analytics and forecasting tools to identify trends,
+ patterns, and future scenarios, enhancing proactive decision-making.
+- Scalability and Performance: Tableau's scalable architecture ensures that as
+ data volumes grow, the system remains efficient and capable of handling
+ increasing demands.
## Track One Studio
-Track One Studio appears to be a comprehensive data analytics and business intelligence platform
-designed to help organizations transform their data into actionable insights. It offers a range of
-features that can benefit educational institutions and other sectors:
+Track One Studio appears to be a comprehensive data analytics and business
+intelligence platform designed to help organizations transform their data into
+actionable insights. It offers a range of features that can benefit educational
+institutions and other sectors:
-- Data Integration: Track One Studio supports data integration from various sources, allowing
- educational institutions to consolidate and analyse data from student information systems,
- academic records, financial databases, and more.
-- Visualization and Dashboards: The platform offers tools for creating interactive dashboards and
- visualizations, enabling users to explore data trends and patterns visually. Customizable
- dashboards can provide insights into student performance, enrolment data, resource allocation, and
+- Data Integration: Track One Studio supports data integration from various
+ sources, allowing educational institutions to consolidate and analyse data
+ from student information systems, academic records, financial databases, and
more.
-- Advanced Analytics: Track One Studio may provide advanced analytics capabilities, such as
- predictive modelling, forecasting, and statistical analysis. These features can help educational
- institutions make data-driven decisions to improve student outcomes and optimize operations.
-- Collaboration and Sharing: The platform seems to facilitate collaboration among stakeholders by
- allowing users to share reports, dashboards, and insights. This can enhance communication and
- decision-making across departments.
-- User-Friendly Interface: Track One Studio's user-friendly interface aims to make data analysis
- accessible to users with varying levels of technical expertise, empowering educators and
- administrators to harness the power of data.
-- Security: Data security is crucial for educational institutions. Track One Studio might offer
- security features to protect sensitive student information and comply with data privacy
- regulations.
-- Customization: Educational institutions often have unique data requirements. Track One Studio may
- provide customization options to tailor the platform to the specific needs of schools and
- universities.
+- Visualization and Dashboards: The platform offers tools for creating
+ interactive dashboards and visualizations, enabling users to explore data
+ trends and patterns visually. Customizable dashboards can provide insights
+ into student performance, enrolment data, resource allocation, and more.
+- Advanced Analytics: Track One Studio may provide advanced analytics
+ capabilities, such as predictive modelling, forecasting, and statistical
+ analysis. These features can help educational institutions make data-driven
+ decisions to improve student outcomes and optimize operations.
+- Collaboration and Sharing: The platform seems to facilitate collaboration
+ among stakeholders by allowing users to share reports, dashboards, and
+ insights. This can enhance communication and decision-making across
+ departments.
+- User-Friendly Interface: Track One Studio's user-friendly interface aims to
+ make data analysis accessible to users with varying levels of technical
+ expertise, empowering educators and administrators to harness the power of
+ data.
+- Security: Data security is crucial for educational institutions. Track One
+ Studio might offer security features to protect sensitive student information
+ and comply with data privacy regulations.
+- Customization: Educational institutions often have unique data requirements.
+ Track One Studio may provide customization options to tailor the platform to
+ the specific needs of schools and universities.
## Integration with OnTrack
-The successful integration of an analytics tool with the OnTrack platform is crucial for maximizing
-data utilization in educational institutions. Integration considerations include data compatibility,
-API availability, and seamless workflow alignment. Each tool's integration potential with OnTrack
-needs to be assessed based on the platform's requirements and APIs provided by the analytics tools.
+The successful integration of an analytics tool with the OnTrack platform is
+crucial for maximizing data utilization in educational institutions. Integration
+considerations include data compatibility, API availability, and seamless
+workflow alignment. Each tool's integration potential with OnTrack needs to be
+assessed based on the platform's requirements and APIs provided by the analytics
+tools.
## What Schools and Universities Seek
-Educational institutions commonly seek the following attributes in a data analytics system:
+Educational institutions commonly seek the following attributes in a data
+analytics system:
-- Data Consolidation: The ability to integrate data from disparate sources to provide a
- comprehensive view.
-- Customization: Tailoring dashboards and reports to meet the unique needs of different departments
- and stakeholders.
-- User-Friendly Interface: Intuitive tools that empower non-technical users to explore data and
- generate insights.
-- Predictive Capabilities: Tools for predictive modelling and forecasting to support data-driven
- decision-making.
-- Security: Robust security measures to protect sensitive student and institutional data.
+- Data Consolidation: The ability to integrate data from disparate sources to
+ provide a comprehensive view.
+- Customization: Tailoring dashboards and reports to meet the unique needs of
+ different departments and stakeholders.
+- User-Friendly Interface: Intuitive tools that empower non-technical users to
+ explore data and generate insights.
+- Predictive Capabilities: Tools for predictive modelling and forecasting to
+ support data-driven decision-making.
+- Security: Robust security measures to protect sensitive student and
+ institutional data.
## Comparison Table: OctopusBI vs. Tableau vs. Track One Studio
@@ -108,10 +124,12 @@ Educational institutions commonly seek the following attributes in a data analyt
## Conclusion
-After analyzing the features and capabilities of OctopusBI, Tableau, and Track One Studio, it's
-evident that each tool offers distinct advantages that can benefit educational institutions seeking
-to leverage data analytics for better decision-making.
+After analyzing the features and capabilities of OctopusBI, Tableau, and Track
+One Studio, it's evident that each tool offers distinct advantages that can
+benefit educational institutions seeking to leverage data analytics for better
+decision-making.
-When considering integration with the OnTrack platform, it's essential to assess the compatibility
-and integration options provided by each tool. Additionally, budget, user preferences, and specific
-institutional requirements should be taken into account when making a decision.
+When considering integration with the OnTrack platform, it's essential to assess
+the compatibility and integration options provided by each tool. Additionally,
+budget, user preferences, and specific institutional requirements should be
+taken into account when making a decision.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/spelling-and-grammar-template.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/spelling-and-grammar-template.md
index f0950a483..8db5ed550 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/spelling-and-grammar-template.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/spelling-and-grammar-template.md
@@ -21,8 +21,9 @@ title: Spelling and Grammar Errors Template
- Documentation Title: Spelling and Grammar Errors Template
- Documentation Type: Informative
- Documentation Information Summary:
- - This document is a template teams can use to both identify and fix spelling and grammar issues
- in documentation related to or handled by their respective team.
+ - This document is a template teams can use to both identify and fix spelling
+ and grammar issues in documentation related to or handled by their
+ respective team.
### Document Review Information
@@ -38,11 +39,11 @@ title: Spelling and Grammar Errors Template
---
- Markdown: markup language used by Thoth Tech in documentation creation.
-- GitHub: online repository system where Thoth Tech keeps their data and information, including
- documentation.
-- Prettier, Vale: tools used to ensure documentation written meets required standards. The
- installation guide in the Key Links/Resources section below explains more about these
- technologies.
+- GitHub: online repository system where Thoth Tech keeps their data and
+ information, including documentation.
+- Prettier, Vale: tools used to ensure written documentation meets the required
+  standards. The installation guide in the Key Links/Resources section below
+  explains more about these technologies.
### Key Links/Resources
@@ -75,12 +76,13 @@ See Key Links/Resources section above.
---
-This template has been provided as a way for your team to both log and fix spelling and grammar
-errors. These errors are those which you and your team find in documentation that your team is
-either responsible for, or which relates to your team. For example, let's say a particular team's
-document has an error. The error should be added to that team's version of this template. This way,
-the team responsible for the document can be made aware of the error. The team which the document
-relates to can then evaluate whether it must be fixed or not.
+This template has been provided as a way for your team to both log and fix
+spelling and grammar errors. These errors are those which you and your team find
+in documentation that your team is either responsible for, or which relates to
+your team. For example, let's say a particular team's document has an error. The
+error should be added to that team's version of this template. This way, the
+team responsible for the document can be made aware of the error. The team which
+the document relates to can then evaluate whether it must be fixed or not.
### How To Create Your New Spelling And Grammar File
@@ -88,43 +90,48 @@ relates to can then evaluate whether it must be fixed or not.
The following steps outline how this document can be set up for your team.
-**NOTE:** As this is a template, when **_{Insert Team Here}_** appears, replace it with your team
-name/area. For example, a team named "Critical Issues" or whose area is "Critical Issues" would
-replace "**_{Insert Team Here}_**" with "Critical Issues".
-
-1. Create an empty Markdown file in a space your team can access. This should be named something
- like "**_{Insert Team Here}_** Spelling and Grammar Errors."
-2. Copy this template into the new file, _starting from the_ "**_{Insert Team Here}_** Spelling and
- Grammar Errors List" _heading below this section._ Information on how to contribute to the
- template is also included below as well.
-3. Certain sections below contain the text "**_{Insert Team Here}_**". Ensure these are replaced
- with your team's name and/or section.
+**NOTE:** As this is a template, when **_{Insert Team Here}_** appears, replace
+it with your team name/area. For example, a team named "Critical Issues" or
+whose area is "Critical Issues" would replace "**_{Insert Team Here}_**" with
+"Critical Issues".
+
+1. Create an empty Markdown file in a space your team can access. This should be
+ named something like "**_{Insert Team Here}_** Spelling and Grammar Errors."
+2. Copy this template into the new file, _starting from the_ "**_{Insert Team
+   Here}_** Spelling and Grammar Errors List" _heading below this section._
+   Information on how to contribute to the template is also included below.
+3. Certain sections below contain the text "**_{Insert Team Here}_**". Ensure
+ these are replaced with your team's name and/or section.
4. That's all. You now have a document to which your team can add errors.
### Further Steps **_ONLY For Teams Working With OnTrack_**
---
-For teams **working with OnTrack**, Thoth Tech has a special template for OnTrack Documentation.
-This template helps your reader understand your work by concentrating all of the important
-information of your document in one place. This allows your reader to get the most important
-information out of your documentation as quickly as possible.
+For teams **working with OnTrack**, Thoth Tech has a special template for
+OnTrack Documentation. This template helps your reader understand your work by
+concentrating all of the important information of your document in one place.
+This allows your reader to get the most important information out of your
+documentation as quickly as possible.
-To incorporate the OnTrack Documentation Template into your new Spelling and Grammar file, follow
-these steps.
+To incorporate the OnTrack Documentation Template into your new Spelling and
+Grammar file, follow these steps.
-1. After copying the below sections of the Spelling and Grammar template into your new file, open
- the
+1. After copying the below sections of the Spelling and Grammar template into
+ your new file, open the
[OnTrack Documentation Template](https://github.com/thoth-tech/documentation/blob/main/docs/OnTrack/Documentation/OnTrack%20Documentation%20Template.md).
-2. Copy the contents of the OnTrack Documentation Template into your new Spelling and Grammar file.
- The contents of the OnTrack Documentation Template should come before (precede) the contents of
- the Spelling and Grammar Template in your new file.
-3. Fill out the relevant sections of the OnTrack Documentation Template in your new Spelling and
- Grammar file.
+2. Copy the contents of the OnTrack Documentation Template into your new
+ Spelling and Grammar file. The contents of the OnTrack Documentation Template
+ should come before (precede) the contents of the Spelling and Grammar
+ Template in your new file.
+3. Fill out the relevant sections of the OnTrack Documentation Template in your
+ new Spelling and Grammar file.
- Thoth Tech has an
[OnTrack Documentation Template Guide](https://github.com/thoth-tech/documentation/blob/main/docs/OnTrack/Documentation/OnTrack-Documentation-Template-Guide.md),
- which explains how to do this. It also explains in more detail why the template is used, and
- contains some links on writing documentation for Thoth Tech.
+ which explains how to do this. It also explains in more detail why the
+ template is used, and contains some links on writing documentation for
+ Thoth Tech.
## **_{Insert Team Here}_** Spelling and Grammar Errors List
@@ -132,13 +139,13 @@ these steps.
---
-The purpose of this document is to form a list wherein spelling and grammar issues in current
-**_{Insert Team Here}_** documentation can be identified. They can then be subsequently assessed,
-and if need be, fixed.
+The purpose of this document is to form a list wherein spelling and grammar
+issues in current **_{Insert Team Here}_** documentation can be identified. They
+can then be assessed and, if need be, fixed.
-Fixing these spelling and grammar mistakes increases the effectiveness and efficiency of the
-documentation in question. This erases the potential for confusion as a result of any erroneous
-words or phrases.
+Fixing these spelling and grammar mistakes increases the effectiveness and
+efficiency of the documentation in question. This erases the potential for
+confusion as a result of any erroneous words or phrases.
## Data Required For Each Entry
@@ -147,49 +154,54 @@ words or phrases.
The following data points are recorded for each entry:
- Document Name
- - The document name allows the reader to understand in which file you have found an error, or
- something that must be checked over.
+ - The document name allows the reader to understand in which file you have
+ found an error, or something that must be checked over.
- Document File Path (in GitHub)
- - Most all of Thoth Tech's documentation can be found in it's GitHub repositories. Including the
- document file path helps the reader find the document quickly, instead of searching through all
- of Thoth Tech's GitHub repositories for it. The reader could instead follow this file path to
- identify where the document is that you are referring to. For example, the file path of the
- OnTrack Documentation is: _documentation/docs/OnTrack/Documentation_.
+ - Most of Thoth Tech's documentation can be found in its GitHub
+ repositories. Including the document file path helps the reader find the
+ document quickly, instead of searching through all of Thoth Tech's GitHub
+ repositories for it. The reader can simply follow this file path to
+ locate the document you are referring to. For example, the
+ file path of the OnTrack Documentation is:
+ _documentation/docs/OnTrack/Documentation_.
- Erroneous Words/Phrases
- - Listing the erroneous words or phrases your entry refers to informs the reader as to where the
- problems are in the document.
+ - Listing the erroneous words or phrases your entry refers to informs the
+ reader as to where the problems are in the document.
- Corrective Suggestions
- - Listing these indicates to your reader why you think the mentioned words and/or phrases must be
- changed. They can then take these suggestions into consideration when proofreading the document.
+ - Listing these indicates to your reader why you think the mentioned words
+ and/or phrases must be changed. They can then take these suggestions into
+ consideration when proofreading the document.
## How This Document Is Formatted
---
-Entries into this document are made through the use of tables written in Markdown. The table appears
-as shown below:
+Entries in this document are made using tables written in
+Markdown. The table appears as shown below:
| Document Name | Document File Path (in GitHub) | Erroneous Words/Phrases | Corrective Suggestions |
| ------------- | ------------------------------ | ----------------------------------- | ----------------------------------------------------------- |
| "Example-1" | documents/Example-1 | "...their is a document called..." | In this context, "there" should be used instead of "their". |
| "Example-2" | documents/Example-2 | "... When's the meeting today> ..." | ">" should be replaced with "?". |
-For further information on Markdown, what it is and how to use it, Thoth Tech has a
+For further information on Markdown, what it is and how to use it, Thoth Tech
+has a
[Markdown Guide](https://github.com/thoth-tech/handbook/blob/main/docs/learning/training/markdown-guide.md).
### Adding A New Entry
---
-To add an entry to an existing table, each bit of data you enter should be separated by a '|'. For
-example, the first entry of the above table, in Markdown, is therefore written like this:
+To add an entry to an existing table, each piece of data you enter should be
+separated by a '|'. For example, the first entry of the above table is
+therefore written in Markdown like this:
-> | "Example-1" | documents/Example-1 | "...their is a document called..." | In this context,
-> "there" should be used instead of "their". |
+> | "Example-1" | documents/Example-1 | "...their is a document called..." | In
+> this context, "there" should be used instead of "their". |
-As you can see, each column's data in the entry is separated by a "|". A "|" is also needed on each
-end of the table as well. In the above example, this is before "Example-1" and after "Meeting is
-spelt incorrectly".
+As you can see, each column's data in the entry is separated by a "|". A "|" is
+also needed at each end of the row. In the above example, this is before
+"Example-1" and after the final corrective suggestion.
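Putting the header, the separator row, and the entries together, the raw Markdown source of the full table looks like this (the "Example-3" row is a hypothetical new entry added purely for illustration):

```markdown
| Document Name | Document File Path (in GitHub) | Erroneous Words/Phrases | Corrective Suggestions |
| ------------- | ------------------------------ | ----------------------------------- | ----------------------------------------------------------- |
| "Example-1" | documents/Example-1 | "...their is a document called..." | In this context, "there" should be used instead of "their". |
| "Example-2" | documents/Example-2 | "... When's the meeting today> ..." | ">" should be replaced with "?". |
| "Example-3" | documents/Example-3 | "...the team attendeded the meeting..." | "attendeded" should be spelt "attended". |
```

Note the second line of hyphens: Markdown requires this separator row between the header and the data rows, and each new entry is simply appended as one more "|"-delimited line.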
## Found Spelling and Grammar Instances In GitHub
@@ -199,25 +211,27 @@ spelt incorrectly".
---
-- To save readers having to go in between many different files when looking through this list,
- _order entries by document_.
-- Tables written in Markdown do not allow dot points in their data. Therefore, if more than one
- corrective suggestion is written, these must be numbered to help the reader differentiate them.
- For example: 1. First suggestion. 2. Second suggestion.
-- Where corrective suggestions are small changes, for example, the changing of one letter/character,
- highlighting the changes in your suggestions may better help readers understand them. This can be
- done through the use of italics or bold text.
- - _Italics_ can be written by including one asterisk ("\*") on either side of the text you wish to
- be in italics.
- - **Bold** text can be written by including two asterisks ("\*\*") on either side of the text you
- wish to be in bold.
-- Ensure entries are written so that any reader, regardless of their level of contextual knowledge,
- could understand what your suggestions mean.
+- To save readers having to go in between many different files when looking
+ through this list, _order entries by document_.
+- Tables written in Markdown do not allow dot points in their data. Therefore,
+ if more than one corrective suggestion is written, these must be numbered to
+ help the reader differentiate them. For example: 1. First suggestion. 2.
+ Second suggestion.
+- Where corrective suggestions are small changes, for example, the changing of
+ one letter/character, highlighting the changes in your suggestions may better
+ help readers understand them. This can be done through the use of italics or
+ bold text.
+ - _Italics_ can be written by including one asterisk ("\*") on either side of
+ the text you wish to be in italics.
+ - **Bold** text can be written by including two asterisks ("\*\*") on either
+ side of the text you wish to be in bold.
+- Ensure entries are written so that any reader, regardless of their level of
+ contextual knowledge, could understand what your suggestions mean.
- Thoth Tech uses a **set of rules** when **writing documentation**. The
[Writing Style Guide](https://github.com/thoth-tech/handbook/blob/main/docs/processes/documentation/writing-style-guide.md)
outlines these.
-- Thoth Tech also uses the tools "Prettier" and "Vale" for **writing documentation**. Thoth Tech has
- an
+- Thoth Tech also uses the tools "Prettier" and "Vale" for **writing
+ documentation**. Thoth Tech has an
[Installation Guide](https://github.com/thoth-tech/handbook/blob/main/docs/learning/useful-resources/setup-prettier-and-vale.md)
for both of these technologies.
@@ -225,14 +239,15 @@ spelt incorrectly".
---
-- If you are not a member of the team to which this list belongs to, ensure you have their
- permission **_before_** editing documentation.
- - You may not understand their workings and could hinder their progress if you make unannounced
- and/or unverified changes. Changing their work without their knowledge could have unforesee
- consequences.
- - What you perceive as spelling and/or grammar errors in a document may be correct in the context
- of that document. It would be beneficial to ask whether something is an error, instead of
- assuming it is and editing the document.
+- If you are not a member of the team to which this list belongs, ensure you
+  have their permission **_before_** editing documentation.
+  - You may not understand their workings and could hinder their progress if you
+    make unannounced and/or unverified changes. Changing their work without
+    their knowledge could have unforeseen consequences.
+ - What you perceive as spelling and/or grammar errors in a document may be
+ correct in the context of that document. It would be beneficial to ask
+ whether something is an error, instead of assuming it is and editing the
+ document.
- Once an error is fixed and verified as correct, delete it from this list.
### List of Known Instances
diff --git a/src/content/docs/Products/OnTrack/Documentation/Documentation/spike-frontend-documentation-investigation.md b/src/content/docs/Products/OnTrack/Documentation/Documentation/spike-frontend-documentation-investigation.md
index 3c75fa826..e448a6274 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Documentation/spike-frontend-documentation-investigation.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Documentation/spike-frontend-documentation-investigation.md
@@ -10,9 +10,9 @@ title: Spike Outcomes
## Goal / Deliverables
-- The goal is to create a document which outlines desire tools and format of document that can be
- used to document angular component and service, which can be used for improving developer
- efficiency.
+- The goal is to create a document which outlines the desired tools and document
+  format for documenting Angular components and services, in order to improve
+  developer efficiency.
- Add any documentation tasks in the Frontend Documentation Backlog.
@@ -30,42 +30,52 @@ title: Spike Outcomes
## Tasks undertaken
-- Have basic knowledge of Angular and what its main elements are and how they interact with other
- applications within the system.
+- Gain basic knowledge of Angular: what its main elements are and how they
+  interact with other applications within the system.
-- Learn how to use JSDoc comment in angular application. Know that JSDoc comment is a standard way
- to document code in TypeScript.
+- Learn how to use JSDoc comments in an Angular application. JSDoc comments are
+  a standard way to document code in TypeScript.
-- Install Compodoc into VsCode application. Installation guide can be found on google.
+- Install Compodoc via the VS Code terminal. An installation guide can be found
+  online.
## What we found out
-- We found many ways that angular can be documented to improve developer experience and efficiency.
- However, we ended up picking only two applications that can help us document the whole system.
+- We found many ways that Angular can be documented to improve developer
+  experience and efficiency. However, we ended up picking only two tools
+  that can help us document the whole system.
-- JSDoc Comments can be used into TypeScript code to provide better understanding to developers on
- how to use components that are already in place or even how to extend them. If we want to create
- JSDoc comment we can use ‘/\*\*’ and press entry, which indicated the opening delimiter for the
- comment. ‘\*’ indicates that the comment is part of the documentation. The actual documentation
+- JSDoc comments can be used in TypeScript code to give developers a better
+  understanding of how to use components that are already in place, or even how
+  to extend them. To create a JSDoc comment, type ‘/\*\*’ and press Enter, which
+  inserts the opening delimiter for the comment. ‘\*’ indicates that a line is
+  part of the documentation. The
+  actual documentation
- happens by using some of the common JSDoc tags like '@returns', '@private' and many more. Lastly,
- we will close and mark the end of comment by using ‘\*’ symbol.
+  happens by using some of the common JSDoc tags like '@returns', '@private' and
+  many more. Lastly, we close and mark the end of the comment with the ‘\*/’
+  delimiter.
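As a minimal sketch of what this looks like in practice (the function below is a hypothetical example, not code from the OnTrack codebase), a JSDoc comment using the '@param' and '@returns' tags might read:

```typescript
/**
 * Calculates the percentage of tasks a student has completed.
 *
 * @param completed - Number of tasks marked as complete.
 * @param total - Total number of tasks in the unit.
 * @returns The completion percentage, rounded to the nearest integer.
 */
function completionPercentage(completed: number, total: number): number {
  if (total === 0) {
    return 0; // Avoid division by zero for units with no tasks.
  }
  return Math.round((completed / total) * 100);
}

console.log(completionPercentage(7, 20)); // 35
```

Editors such as VS Code surface this description and the parameter documentation on hover, and Compodoc picks up the same comments when generating its documentation site.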
-1. Download compodoc in your system by running this command: npm install -g @compodoc/compodoc into
- VsCode terminal.
+1. Download Compodoc onto your system by running the command npm install -g
+   @compodoc/compodoc in the VS Code terminal.
1. Configure Compodoc by creating a tsconfig.docs.json file in the root directory.
-1. In order to generate a document, we are going to use ‘compodoc -p sconfig.docs.json -s’ command.
+1. To generate the documentation, we are going to use the ‘compodoc -p
+   tsconfig.docs.json -s’ command.
-1. After the installation is done, it would generate a URL where we can access the documentation.
+1. After the generation is done, it will output a URL where we can access
+   the documentation.
-1. After we are doing with installing the application, we can view the documentation by visiting
+1. Once we are done installing the application, we can view the
+   documentation by visiting
   or the URL that was generated in the terminal.
-- If we were to implement compodoc into our project, it would help us to generate documentation for
- modules, component, routes, any directives, pips, interface and many more.
+- If we were to implement Compodoc in our project, it would help us to
+  generate documentation for modules, components, routes, directives, pipes,
+  interfaces and many more.
-- So, the main idea behind is that we would first write JSDoc comment in our TypeScript code and
- later create a document using compodoc which can be shared very easily among teams and developers.
+- So, the main idea is that we would first write JSDoc comments in our
+  TypeScript code and later create a document using Compodoc, which can be
+  shared very easily among teams and developers.
diff --git a/src/content/docs/Products/OnTrack/Documentation/File Submission Enhancements/project-summary-document.md b/src/content/docs/Products/OnTrack/Documentation/File Submission Enhancements/project-summary-document.md
index 26c7a94fa..0885585e5 100644
--- a/src/content/docs/Products/OnTrack/Documentation/File Submission Enhancements/project-summary-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/File Submission Enhancements/project-summary-document.md
@@ -12,9 +12,10 @@ title: OnTrack Word Document Submission Tea
- Documentation Title: OnTrack Word Document Submission Team
- Documentation Type: Technical
-- Documentation Information Summary: Document detailing the original project aim, new project aim
- for our trimester, information on the two containers researched, performance KPI's and evaluating
- results, and a future students section for important information.
+- Documentation Information Summary: Document detailing the original project
+  aim, the new project aim for our trimester, information on the two containers
+  researched, performance KPIs and evaluation results, and a future students
+  section for important information.
## Document Review Information
@@ -38,26 +39,30 @@ See .
## Project Aim
\
-Overall, the aim of this project is to extend the Doubtfire (OnTrack) Learning Management System by
-allowing students to upload word documents against a task submission. \
+Overall, the aim of this project is to extend the Doubtfire (OnTrack) Learning
+Management System by allowing students to upload Word documents against a task
+submission. \
\
-Word documents must be converted to the Portable Document Format (PDF) so it can be made accessible
-to the tutors and unit convenors for assessment. \
+Word documents must be converted to the Portable Document Format (PDF) so they
+can be made accessible to the tutors and unit convenors for assessment. \
\
Currently, Doubtfire only supports code, PDF and image task submissions.
## Project Aim for Trimester 3, 2022
\
-This trimester, our aim is to research, compare and decide on the technology implementation that
-will be used to convert the word document submissions to PDF. \
-As a number of the team members are new to the capstone program, and/or new to the Doubtfire
-architecture, a significant amount of time is dedicated to research and upskilling.
+This trimester, our aim is to research, compare and decide on the technology
+implementation that will be used to convert the word document submissions to
+PDF. \
+As a number of the team members are new to the capstone program, and/or new to
+the Doubtfire architecture, a significant amount of time is dedicated to
+research and upskilling.
## Introduction
-This trimester, we have focused on researching and comparing two Docker images which can be used for
-converting word documents to PDFs; Pandoc and LibreOffice Writer. Specifically, these images are: \
+This trimester, we have focused on researching and comparing two Docker images
+which can be used for converting Word documents to PDFs: Pandoc and LibreOffice
+Writer. Specifically, these images are: \
\
- pandoc/latex:2.17
@@ -66,59 +71,66 @@ converting word documents to PDFs; Pandoc and LibreOffice Writer. Specifically,
## Pandoc
\
- is a Command Line Interface (CLI) document converter which supports an
-impressive number of conversion input and output permutations, including word processing file
-extensions such as .docx, .rtf and .odt. Pandoc is well documented and easy to use. Pandoc CLI is
-available as a Docker image, so it can be executed in isolation or installed natively within an
-existing container.
+ is a Command Line Interface (CLI) document converter which
+supports an impressive number of conversion input and output permutations,
+including word processing file extensions such as .docx, .rtf and .odt. Pandoc
+is well documented and easy to use. Pandoc CLI is available as a Docker image,
+so it can be executed in isolation or installed natively within an existing
+container.
## LibreOffice
\
- is an open source productivity suite which contains a number of
-standalone applications which are used to author documents, spreadsheets and presentations. While
-working with LibreOffice typically involves a graphical user interface, headless interactions are
-possible via the CLI. Being an open sourced software, many implementations or flavours are available
-on the internet. Because of this, we found it challenging to navigate and find documentation that
-best fit our use case. LibreOffice CLI is also available in a Docker image.
+ is an open source productivity suite which
+contains a number of standalone applications which are used to author documents,
+spreadsheets and presentations. While working with LibreOffice typically
+involves a graphical user interface, headless interactions are possible via the
+CLI. As it is open-source software, many implementations or flavours are
+available on the internet. Because of this, we found it challenging to navigate
+and find documentation that best fit our use case. LibreOffice CLI is also
+available in a Docker image.
## Performance
\
-In an effort to make an informed decision on which technology we should utilise going forward, we
-had established three Key Performance Indicators (KPIs) in which we evaluated each technology
-against. \
+In an effort to make an informed decision on which technology we should utilise
+going forward, we established three Key Performance Indicators (KPIs) against
+which we evaluated each technology. \
\
- Speed; the time it took for a conversion to occur.
- Quality; the condition of the conversion output in comparison to the input.
- Footprint; the size of the technology implementation. \
\
- In order to measure the technologies against the KPIs mentioned above, we had created a
- benchmarking tool in Python/Jupyter Notebook which allowed us to perform the evaluation. \
+ To measure the technologies against the KPIs mentioned above, we
+ created a benchmarking tool in Python/Jupyter Notebook which allowed us to
+ perform the evaluation. \
\
- As a base case (it can be easily extended), the benchmarking tool converts three different sized
- ".docx" files, being 100kb, 500kb and 1MB, to ".pdf" and compares the time it took for each of the
- technologies. Whereby each file is converted three times so we can report on the best, worst and
- average cases. \
+ As a base case (it can be easily extended), the benchmarking tool converts
+ three different-sized ".docx" files, being 100 kB, 500 kB and 1 MB, to ".pdf",
+ and compares the time each technology took. Each file is converted three times
+ so that we can report on the best, worst and average cases. \
\ The benchmarking tool and its outputs are available at “./Performance
Benchmarking/runner.ipynb”.
## Performance Conclusions
\
-More detail is available in the benchmarking tool itself, a summary is provided below. \
+More detail is available in the benchmarking tool itself; a summary is provided
+below. \
\
 \
\
-- \*An average of three conversions of a 1MB .docx file. Speed should be used as an indicator only,
- this largely depends on the available resources on the executing machine. \
+- \*An average of three conversions of a 1 MB .docx file. Speed should be used
+  as an indicator only, as it largely depends on the available resources of the
+  executing machine. \
\
Notes on Quality \
\
- Quality is a subjective KPI, and can’t be effectively measured using a discrete value. In this
- case, we are using quality as a measurement of likeness between the input and the output. \
+ Quality is a subjective KPI, and can’t be effectively measured using a
+ discrete value. In this case, we are using quality as a measurement of
+ likeness between the input and the output. \
\
See below for comments regarding conversion quality. \
\
@@ -127,50 +139,59 @@ More detail is available in the benchmarking tool itself, a summary is provided
## Future Students
\
-For future students, reflect on the overall project aim, and start thinking in which creative ways
-you can help to contribute to this project team. Ultimately by integrating the chosen container into
-OnTrack successfully, the original project aim will be complete. \
+For future students, reflect on the overall project aim, and start thinking
+about creative ways you can contribute to this project team. Ultimately,
+by integrating the chosen container into OnTrack successfully, the original
+project aim will be complete. \
\
-A project which was new to the whole team during trimester 3 2022, as well as being condensed into a
-very short period of time, was ultimately what allowed us to achieve what we could and overcoming
-challenges. With these resources being created with the intention to serve great future reference to
-future students and try help them get a head start as much as possible, it is important to take some
-time and read/understand all the information provided and available. \
+This project was new to the whole team during Trimester 3 2022 and was
+condensed into a very short period of time, which shaped both what we could
+achieve and the challenges we overcame. These resources were created to serve
+as a reference for future students and to help them get as much of a head start
+as possible, so it is important to take some time to read and understand all
+the information provided. \
\
-So have a read of some of the important project information below, and hopefully clears up a lot of
-questions you would’ve had, to give a head start into the Capstone unit. \
+So have a read of the important project information below; hopefully it clears
+up a lot of the questions you would’ve had and gives you a head start in the
+Capstone unit. \
\
-**First Steps:** understand the structure of OnTrack, what it is, and how it works. \
+**First Steps:** understand the structure of OnTrack, what it is, and how it
+works. \
\
-In brief, Doubtfire (OnTrack) can be described as a “modern, lightweight LMS” (Learning Management
-System) that helps students submit work and receive feedback on it. \
+In brief, Doubtfire (OnTrack) can be described as a “modern, lightweight LMS”
+(Learning Management System) that helps students submit work and receive
+feedback on it. \
\
 \
\
-When looking at the OnTrack Architecture diagram, the red box, representing the API, is where most
-of our work will be in, as it is mainly backend work that needs to be done. \
+When looking at the OnTrack Architecture diagram, the red box, representing the
+API, is where most of our work will be, as it is mainly backend work that
+needs to be done. \
\
-Doubtfire’s API is done in an open-source framework known as Ruby on Rails. Rails is written in Ruby
-and provides default structures for a database and a web page. \
+Doubtfire’s API is built with an open-source framework known as Ruby on Rails.
+Rails is written in Ruby and provides default structures for a database and a
+web page. \
\
-If you’re not familiar with the language or framework, learning and upskilling in these areas can be
-included in the hours dedicated to upskilling. \
+If you’re not familiar with the language or framework, learning and upskilling
+in these areas can be included in the hours dedicated to upskilling. \
\
-**NOTE:** minimum 30 hours of upskilling required, as explained by the directors of the capstone
-unit. Could be subject to change for future trimesters, so please check with unit team first. \
-\
-**Deploy OnTrack Locally:** should be your priority if you are not a returning student to the
-project. The link below will include three markdown files explaining everything you need to do to
-successfully set Doubtfire up on your local computer, and so that you can get started working on it.
+**NOTE:** a minimum of 30 hours of upskilling is required, as explained by the
+directors of the capstone unit. This could be subject to change for future
+trimesters, so please check with the unit team first. \
\
+**Deploy OnTrack Locally:** this should be your priority if you are not a
+returning student to the project. The link below includes three markdown files
+explaining everything you need to do to successfully set Doubtfire up on your
+local computer so that you can get started working on it. \
\
**Link:**
\
\
-**Direction Moving Forwards & Deliverables:** now that you have Doubtfire setup and working locally,
-you can start thinking about solutions and ways in which to contribute. \
+**Direction Moving Forwards & Deliverables:** now that you have Doubtfire set
+up and working locally, you can start thinking about solutions and ways in
+which to contribute. \
\
-I would recommend now that you have understood a bit of the structure of OnTrack, to then understand
-how the API works, and finding the related code for the API to be able to start working on that. The
-two kind of go hand in hand, understanding the backend of OnTrack’s API, alongside the overall
-structure.
+Now that you have understood a bit of the structure of OnTrack, I would
+recommend learning how the API works and finding the related code so you can
+start working on it. The two go hand in hand: understanding the backend of
+OnTrack’s API alongside the overall structure.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-compose-with-wsl2.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-compose-with-wsl2.md
index 07e5a277c..d1b96fbac 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-compose-with-wsl2.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-compose-with-wsl2.md
@@ -6,9 +6,10 @@ title: Docker Compose with WSL2
## How to Enable Windows Subsystem for Linux
-If you're facing problems installing Linux bash shell on Windows 10, one of the issues might be
-you've not enabled Windows Subsystem for Linux. If that's the case, you'll bump into an error: "The
-Windows Subsystem for Linux optional component is not enabled. Please enable it and try again."
+If you're facing problems installing Linux bash shell on Windows 10, one of the
+issues might be you've not enabled Windows Subsystem for Linux. If that's the
+case, you'll bump into an error: "The Windows Subsystem for Linux optional
+component is not enabled. Please enable it and try again."
Here's how to enable Windows Subsystem for Linux component in Windows 10:
@@ -17,8 +18,10 @@ Here's how to enable Windows Subsystem for Linux component in Windows 10:
3. Click Programs and Features under the Related settings section on the right.

-4. Under the Programs and Features page, click Turn Windows features on or off on the left panel.
-5. Scroll down and enable Windows Subsystem for Linux. 
+4. Under the Programs and Features page, click Turn Windows features on or off
+ on the left panel.
+5. Scroll down and enable Windows Subsystem for Linux.
+ 
6. Click OK to save your changes.
7. Hit Restart now to finish the process.
@@ -30,7 +33,8 @@ wsl --install -d ubuntu
### **Upgrade version from WSL 1 to WSL 2**
-To see whether your Linux distribution is set to WSL 1 or WSL 2, use the command:
+To see whether your Linux distribution is set to WSL 1 or WSL 2, use the
+command:
```console
wsl -l -v
@@ -42,18 +46,18 @@ To change versions, use the command:
wsl --set-version 2
```
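Note that `wsl --set-version` takes the distribution name as its first argument; a minimal sketch, assuming your distribution is listed as `Ubuntu` (substitute the name shown by `wsl -l -v`):

```console
wsl -l -v
wsl --set-version Ubuntu 2
```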
-Once installed, you can either launch the application directly from the store or search for Ubuntu
-in your Windows search bar.
+Once installed, you can either launch the application directly from the store or
+search for Ubuntu in your Windows search bar.

-Once Ubuntu has finished its initial setup you will need to create a username and password (this
-does not need to match your Windows user credentials).
+Once Ubuntu has finished its initial setup you will need to create a username
+and password (this does not need to match your Windows user credentials).

-Finally, it’s always good practice to install the latest updates with the following commands,
-entering your password when prompted.
+Finally, it’s always good practice to install the latest updates with the
+following commands, entering your password when prompted.
```console
sudo apt update
@@ -66,17 +70,19 @@ sudo apt-get install net-tools (windows/linux installation)

1. From the Docker menu, select Settings > General.
-2. Select the Use WSL 2 based engine check box. _If you have installed Docker Desktop on a system
- that supports WSL 2, this option will be enabled by default._
+2. Select the Use WSL 2 based engine check box. _If you have installed Docker
+ Desktop on a system that supports WSL 2, this option will be enabled by
+ default._
3. Click Apply & Restart.
## Converting WSL 1 Operating Systems to WSL 2 on Windows
-If you are using WSL1 You will need Windows 10 build 18917 or higher to be able to use WSL 2. Please
-note, you will need to have the Powershell
+If you are using WSL 1, you will need Windows 10 build 18917 or higher to be
+able to use WSL 2. Please note, you will need to have a PowerShell
-Administrator window up. If you are converting WSL 1 to WSL 2 I’d assume you have Linux Subsystem
-for Windows installed. If not, the following command will install it for you.
+Administrator window open. If you are converting WSL 1 to WSL 2, I’d assume you
+have Windows Subsystem for Linux installed. If not, the following command will
+install it for you.
```console
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
@@ -88,15 +94,16 @@ Once you do that you will need to run:
Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
```
-Now you should be able to run, substitute "Distro" with your specific distribution. You can get a
-list by using the command:
+Now you should be able to run the conversion. Substitute "Distro" with your
+specific distribution; you can get a list by using the command:
```console
wsl --list --verbose
wsl --set-version 2
```
-For setting all future distributions to use WSL 2, you will need to use the following command:
+To set all future distributions to use WSL 2, you will need to use the
+following command:
```console
wsl --set-default-version 2
@@ -108,8 +115,8 @@ Now the last step is to verify your changes worked:
wsl --list --verbose
```
-Then you can now follow the standard step for the git clone and docker compose. For development in
-the project, run:
+You can now follow the standard steps for the git clone and docker compose.
+For development in the project, run:
```console
code .
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-setup-tutorial.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-setup-tutorial.md
index 8ce3fd35e..e60e2dc60 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-setup-tutorial.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/docker-setup-tutorial.md
@@ -14,21 +14,23 @@ title: Docker Setup Tutorial
1. Fork **doubtfire-deploy:development**, **doubtfire-api:development**, and
**doubtfire-web:development** from
-2. Clone your doubtfire-deploy. Make sure to fetch submodules to get the subprojects.
+2. Clone your doubtfire-deploy. Make sure to fetch submodules to get the
+ subprojects.
```console
git clone -b development --recurse-submodules https://github.com/[your_github_username]/doubtfire-deploy
```
-3. Change directory to doubtfire-deploy by using: cd doubtfire-deploy. Open a Terminal that supports
- sh scripts (on Windows, you will need WSL, Msys2, or Cygwin). Run the following command to set
- your fork as the remote.
+3. Change directory to doubtfire-deploy by using `cd doubtfire-deploy`. Open a
+   Terminal that supports sh scripts (on Windows, you will need WSL, Msys2, or
+   Cygwin). Run the following command to set your fork as the remote.
```console
bash ./change_remotes.sh
```
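After the script finishes, you can sanity-check that the remotes now point at your fork; a hedged sketch (the printed URL will reflect your own GitHub username):

```console
git remote -v
```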
+4. Change into the development directory and use Docker Compose to set up the
+   database.
```console
cd development
@@ -37,8 +39,8 @@ title: Docker Setup Tutorial
environment:set RAILS_ENV=development && bundle exec rake db:populate"
```
-5. Change into the development directory and use Docker Compose to setup the database. Run in the
- development folder
+5. Use Docker Compose to start the services. Run the following in the
+   development folder:
```console
docker compose up -d
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/setting-up-doubtfire.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/setting-up-doubtfire.md
index acf0b2be2..fce1a4b8c 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/setting-up-doubtfire.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/setting-up-doubtfire.md
@@ -4,8 +4,9 @@ title: Setting up Doubtfire
## 1. Before setting up Doubtfire
-You need to set up a Kali Linux virtual machine and clone the Doubtfire. Any management tool(Virtual
-box, VM ware and so on) to manage these virtual machines will work.
+You need to set up a Kali Linux virtual machine and clone the Doubtfire
+repository. Any management tool (VirtualBox, VMware, and so on) for managing
+these virtual machines will work.
Here is a nice YouTube tutorial:
@@ -18,6 +19,9 @@ Here is the GitHub link to the project that we going to exploit:
What you guys need to do is
-1. read this:
-2. And watch this video of MANJIANG YU:
-3. Install docker-compose. Command: sudo apt install docker-compose Success deploy
+1. Read this:
+
+2. Watch this video by MANJIANG YU:
+
+3. Install docker-compose. Command: `sudo apt install docker-compose`.
+   Successful deploy
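To confirm the install worked before moving on, a quick check (the exact version printed will vary by distribution):

```console
docker-compose --version
```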
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/troubleshooting-docker-backup-for-ontrack.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/troubleshooting-docker-backup-for-ontrack.md
index 697ac4914..8124cfb37 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/troubleshooting-docker-backup-for-ontrack.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Deploy OnTrack/troubleshooting-docker-backup-for-ontrack.md
@@ -4,8 +4,9 @@ title: Guide For Environment setting of Doubtfire
## 1. Before setting up Doubtfire
-You need to set up a Kali Linux virtual machine and clone the Doubtfire. Any management tool(Virtual
-box, VM ware and so on) to manage these virtual machines will work.
+You need to set up a Kali Linux virtual machine and clone the Doubtfire
+repository. Any management tool (VirtualBox, VMware, and so on) for managing
+these virtual machines will work.
Here is a nice YouTube tutorial:
@@ -18,9 +19,12 @@ Here is the GitHub link to the project that we going to exploit:
What you guys need to do is
-1. read this:
-2. And watch this video of MANJIANG YU:
-3. Install docker-compose. Command: sudo apt install docker-compose Success deploy:
+1. Read this:
+
+2. Watch this video by MANJIANG YU:
+
+3. Install docker-compose. Command: `sudo apt install docker-compose`.
+   Successful deploy:
// photo
@@ -28,7 +32,8 @@ What you guys need to do is
1. docker: 'compose' is not a docker command. See 'docker --help'
- > You need to change “docker compose” of file run-full.sh in doubtfire-deploy/development
+ > You need to change “docker compose” in the file run-full.sh in
+ > doubtfire-deploy/development
2. doubtfire-web doesn’t compile successfully:
- Open terminal
@@ -54,6 +59,7 @@ What you guys need to do is
## 4. Give Up
-Still cannot deploy it? Maybe it’s time to give up, you can just use Burp Suite and pentest online
-on my VPS: **IMPORTANT**: don’t scan with BurpSuite you guys won’t find
-anything anyway because of the anchor tag.
+Still cannot deploy it? Maybe it’s time to give up; you can just use Burp Suite
+and pentest online on my VPS: **IMPORTANT**: don’t
+scan with Burp Suite, as you won’t find anything anyway because of the anchor
+tag.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Framework/angular-and-angularjs.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Framework/angular-and-angularjs.md
index 39062c856..04a4a5501 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Framework/angular-and-angularjs.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Framework/angular-and-angularjs.md
@@ -4,52 +4,56 @@ title: Different between Angular and AngularJS
## Definition
-**AngularJS** is an open-source, JavaScript-based, front-end web application framework for dynamic
-web app development. It utilizes HTML as a template language. By extending HTML attributes with
-directives and binding data to HTML with expressions, AngularJS creates an environment that is
-readable, extraordinarily expressive, and quick to develop.
+**AngularJS** is an open-source, JavaScript-based, front-end web application
+framework for dynamic web app development. It utilizes HTML as a template
+language. By extending HTML attributes with directives and binding data to HTML
+with expressions, AngularJS creates an environment that is readable,
+extraordinarily expressive, and quick to develop.
-**Angular 2** is the blanket term used to refer to Angular 2, Angular 4 and all other versions that
-come after AngularJS. Both Angular 2 and 4 are open-source, TypeScript-based front-end web
-application platforms.
+**Angular 2** is the blanket term used to refer to Angular 2, Angular 4 and all
+other versions that come after AngularJS. Both Angular 2 and 4 are open-source,
+TypeScript-based front-end web application platforms.
-**Angular 4** is the latest version of Angular. Although Angular 2 was a complete rewrite of
-AngularJS, there are no major differences between Angular 2 and Angular 4. Angular 4 is only an
-improvement and is backward compatible with Angular 2.
+**Angular 4** is the latest version of Angular. Although Angular 2 was a
+complete rewrite of AngularJS, there are no major differences between Angular 2
+and Angular 4. Angular 4 is only an improvement and is backward compatible with
+Angular 2.
## Angular JS vs Angular
-Angular uses TypeScript and has components as its main building blocks. It is component-based,
-whereas AngularJS uses directives.
+Angular uses TypeScript and has components as its main building blocks. It is
+component-based, whereas AngularJS uses directives.
-Angular's operation employs a hierarchy of components, while AngularJS has directives that allow
-code reusability. So, The AngularJS framework provides reusable components for its users.
+Angular's operation employs a hierarchy of components, while AngularJS has
+directives that allow code reusability. So, the AngularJS framework provides
+reusable components for its users.
**Why Angular?**
- It has a mobile support framework.
-- The latest Angular version supports TypeScript and enables code optimization and modularity by
- employing the OOPS concept.
+- The latest Angular version supports TypeScript and enables code optimization
+ and modularity by employing the OOPS concept.
- It supports the changes for an increased hierarchical dependencies system.
-- A developer can use various features such as syntax for type checking, Dart, TypeScript, ES5,
- iterators, Angular CLI, ES6, and lambda operators.
+- A developer can use various features such as syntax for type checking, Dart,
+ TypeScript, ES5, iterators, Angular CLI, ES6, and lambda operators.
- Angular opts for semantic versioning that has a major-minor-patch arrangement.
- Amongst its best benefits is its provision for the event of simplest routing.
- Are you new to Angular? Check out the Angular tutorial here.
**Why AngularJS?**
-- It's secure MVC (Model-View-Controller) data binding makes application performance dynamic.
+- Its secure MVC (Model-View-Controller) data binding makes application
+  performance dynamic.
- A developer can easily perform unit testing or change detection at any point.
-- It provides several helpful features for web developers, like declarative template language with
- HTML to allow them to make it more intuitive.
-- The open-source framework allows well-structured front-end development. It doesn't require any
- plugin or other platforms to work.
+- It provides several helpful features for web developers, like declarative
+ template language with HTML to allow them to make it more intuitive.
+- The open-source framework allows well-structured front-end development. It
+ doesn't require any plugin or other platforms to work.
- The AngularJS application runs on Android and iOS phones and tablets.
## Reference
-Simplilearn.com. (2018). AngularJS Vs. Angular 2 Vs. Angular 4: Understanding the Differences.
-[online] Available at:
- [Accessed 20
-Sep. 2022].
+Simplilearn.com. (2018). AngularJS Vs. Angular 2 Vs. Angular 4: Understanding
+the Differences. [online] Available at:
+
+[Accessed 20 Sep. 2022].
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_FULL_MIGRATION_PLAN.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_FULL_MIGRATION_PLAN.md
index 12b802e08..604294cf5 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_FULL_MIGRATION_PLAN.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_FULL_MIGRATION_PLAN.md
@@ -1,6 +1,7 @@
---
title: Full Angular Migration Plan - Inbox Component
-description: Complete migration plan for inbox component including parent dependency chain
+description:
+ Complete migration plan for inbox component including parent dependency chain
---
# Full Angular Migration Plan: Inbox Component
@@ -27,8 +28,8 @@ units/tasks/inbox/inbox.coffee (Child)
Uses: unit, unitRole, taskData
```
-**The problem:** Inbox component needs data from two AngularJS parent states. Can't fully migrate
-inbox until parents are migrated.
+**The problem:** Inbox component needs data from two AngularJS parent states.
+Can't fully migrate inbox until parents are migrated.
---
@@ -58,10 +59,11 @@ inbox until parents are migrated.
### PR 1: Migrate units/index (START HERE)
-**Status:** Already started - https://github.com/thoth-tech/doubtfire-web/pull/435
+**Status:** Already started -
+https://github.com/thoth-tech/doubtfire-web/pull/435
-Migrate the root parent first. Replace AngularJS state with Angular resolvers that provide unit and
-unitRole data.
+Migrate the root parent first. Replace AngularJS state with Angular resolvers
+that provide unit and unitRole data.
**Files to change:**
@@ -136,5 +138,6 @@ Each PR needs to verify:
## Key Principle
-**Bottom-up migration:** Start at the root (units/index), work down to the child (inbox). Each PR is
-independently testable and valuable even if later ones are delayed.
+**Bottom-up migration:** Start at the root (units/index), work down to the child
+(inbox). Each PR is independently testable and valuable even if later ones are
+delayed.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_MIGRATION_INVESTIGATION.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_MIGRATION_INVESTIGATION.md
index 7f9a9e3b7..5653e89fe 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_MIGRATION_INVESTIGATION.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/INBOX_MIGRATION_INVESTIGATION.md
@@ -1,6 +1,8 @@
---
title: Inbox Migration Investigation Report
-description: Investigation report for the inbox component migration from AngularJS to Angular
+description:
+ Investigation report for the inbox component migration from AngularJS to
+ Angular
---
# Inbox Migration Investigation Report
@@ -13,8 +15,9 @@ description: Investigation report for the inbox component migration from Angular
## What I Found
-The inbox component has already been migrated to Angular (`InboxComponent`), but it can't work
-standalone because it depends on data from two AngularJS parent states.
+The inbox component has already been migrated to Angular (`InboxComponent`), but
+it can't work standalone because it depends on data from two AngularJS parent
+states.
**Component dependency diagram:**
@@ -27,7 +30,8 @@ inbox (Angular component - already exists)
↓ needs all the above data
```
-The Angular component exists, but the parents feeding it data are still AngularJS.
+The Angular component exists, but the parents feeding it data are still
+AngularJS.
---
@@ -63,15 +67,15 @@ Without the parent data, the component has nothing to display.
### Approach 1: Keep Parents (Partial Migration)
-**What:** Keep AngularJS parents, only migrate inbox routing **Pros:** Quick, minimal changes
-**Cons:** Still dependent on AngularJS, need to revisit later **Files to remove:** Just
-`inbox.coffee` and `.tpl.html`
+**What:** Keep AngularJS parents, only migrate inbox routing
+
+**Pros:** Quick, minimal changes
+
+**Cons:** Still dependent on AngularJS, need to revisit later
+
+**Files to remove:** Just `inbox.coffee` and `.tpl.html`
### Approach 2: Migrate Everything (Complete Migration)
-**What:** Migrate both parent states first, then inbox **Pros:** Clean, fully Angular, no AngularJS
-dependencies **Cons:** More work, needs parent migration first **Files to remove:** All inbox
-AngularJS files after parents done
+**What:** Migrate both parent states first, then inbox
+
+**Pros:** Clean, fully Angular, no AngularJS dependencies
+
+**Cons:** More work, needs parent migration first
+
+**Files to remove:** All inbox AngularJS files after parents done
---
@@ -79,8 +83,9 @@ AngularJS files after parents done
**Approach 2** is the right solution. Here's why:
-Both parent states (`units/index` and `units/tasks`) need migration anyway. If we do partial
-migration now, we'll have to come back and redo work later when parents are migrated.
+Both parent states (`units/index` and `units/tasks`) need migration anyway. If
+we do partial migration now, we'll have to come back and redo work later when
+parents are migrated.
**Migration order:**
@@ -111,7 +116,9 @@ This is detailed in the full migration plan document.
## Key Insight
-The inbox isn't really blocked by its own complexity - it's blocked by parent dependencies. Once
-parents are migrated to Angular, finishing inbox is just cleanup (delete old files).
+The inbox isn't really blocked by its own complexity - it's blocked by parent
+dependencies. Once parents are migrated to Angular, finishing inbox is just
+cleanup (delete old files).
-The real work is in the parent migrations, which is covered in the full migration plan.
+The real work is in the parent migrations, which is covered in the full
+migration plan.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/create-branch-and-initial-migration.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/create-branch-and-initial-migration.md
index 3d1a97d83..5364f8dc7 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/create-branch-and-initial-migration.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/create-branch-and-initial-migration.md
@@ -61,13 +61,14 @@ For the Task Description Card we had the files:
- not-found.component.html
- not-found.component.scss
-Notice the naming convention. When migrating a component we use the format name.component.extension.
-Add the start of the TypeScript using something based on the following:
+Notice the naming convention. When migrating a component we use the format
+`name.component.extension`. Add the start of the TypeScript using something
+based on the following:

-We can’t see any of these changes yet, but it is a good clean start so let’s commit this before we
-move on.
+We can’t see any of these changes yet, but it is a good clean start so let’s
+commit this before we move on.
```console
git add .
@@ -75,9 +76,9 @@ git commit -m "build: create initial files for migration”
git push --set-upstream origin touth/migrate/not-found
```
-Then we should make sure to push this back to GitHub so others can see our progress. As this is a
-new branch you will need to set the upstram branch, but if you forget the `git push` will remind you
-anyway.
+Then we should make sure to push this back to GitHub so others can see our
+progress. As this is a new branch you will need to set the upstream branch, but
+if you forget, `git push` will remind you anyway.
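The branch-then-push-with-upstream flow above can be sketched end to end against a throwaway local "remote", so nothing touches GitHub (the temp directory and bare repo are illustrative assumptions, not part of the real workflow; the branch and commit names come from the steps above):

```shell
#!/bin/sh
set -e
# Illustrative only: a local bare repo stands in for GitHub.
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "my-github-email@gmail.com"
git config user.name "Example Student"
git remote add origin "$tmp/origin.git"
# Commit, create the migration branch, and push it with an upstream set.
git commit --allow-empty -q -m "build: create initial files for migration"
git checkout -q -b touth/migrate/not-found
git push -q --set-upstream origin touth/migrate/not-found
# The local branch now tracks the remote branch:
git rev-parse --abbrev-ref --symbolic-full-name "@{u}"
```

Because the upstream is recorded, a later plain `git push` from this branch needs no arguments.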

@@ -95,26 +96,29 @@ In the ./src/app you should see

-Because we want to migrate AngularJS to Angular, therefore we need to unlink the module from
-AngularJS and link to Angular.
+Because we want to migrate AngularJS to Angular, we need to unlink the module
+from AngularJS and link it to Angular.
-1. Delete the import from related module from (doubtfire-web/src/app.doubtfire-angularjs.module.ts)
+1. Delete the import of the related module from
+   (doubtfire-web/src/app.doubtfire-angularjs.module.ts)
- 
2. Import the newly created TypeScript component
- 
-3. Downgrade the TypeScript Component from (doubtfire-web/src/app.doubtfire-angularjs.module.ts)
+3. Downgrade the TypeScript Component from
+ (doubtfire-web/src/app.doubtfire-angularjs.module.ts)
- 
4. Import the new Component to Angular
- 
5. Add to the Ng Module
- 
-6. Delete module injection if neccessary (parent_folder_name/parent_folder_name.coffee)
+6. Delete module injection if necessary
+   (parent_folder_name/parent_folder_name.coffee)
- 
---
## **Congratulations**
-It is **DONE** for the initial migration. At this stage, you will need to upskill yourself about
-TypeScript, Angular and AngularJS and working in the code base and read the document about Regular
-Migration. Good Luck!
+The initial migration is now **DONE**. At this stage, you will need to upskill
+in TypeScript, Angular, and AngularJS, get comfortable working in the code
+base, and read the document about Regular Migration. Good luck!
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/regular-migration-step.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/regular-migration-step.md
index 2e893379e..d5003e2cc 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/regular-migration-step.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/regular-migration-step.md
@@ -1,12 +1,14 @@
---
-title: You should do this after Create the Branch and Finish the Initial Migration
+title:
+ You should do this after Create the Branch and Finish the Initial Migration
---
> Trimester 2 2022 – SIT374
## Ensure you have your author credentials set up
-You should ensure your git user config details are set to the email address you use with GitHub:
+You should ensure your git user config details are set to the email address you
+use with GitHub:
```shell
git config --global user.email "my-github-email@gmail.com"
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-investigation.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-investigation.md
index 36b0954f9..a806e106e 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-investigation.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-investigation.md
@@ -1,7 +1,8 @@
---
title: Task-Dashboard Migration Investigation
description:
- Investigation report for completing the task-dashboard migration from AngularJS to Angular
+ Investigation report for completing the task-dashboard migration from
+ AngularJS to Angular
---
# Task-Dashboard Migration Investigation
@@ -14,9 +15,10 @@ description:
## What I Found
-The task-dashboard migration is about 80% done. The Angular components exist and work (I can see
-them running in `inbox.component.html`), but the main dashboard view still loads the old AngularJS
-directives. We have two parallel systems running.
+The task-dashboard migration is about 80% done. The Angular components exist and
+work (I can see them running in `inbox.component.html`), but the main dashboard
+view still loads the old AngularJS directives. We have two parallel systems
+running.
---
@@ -32,7 +34,8 @@ I went through the code and found these 7 card components that got migrated:
6. TaskStatusCardComponent
7. TaskAssessmentCardComponent
-All work fine - just need to finish switching the main dashboard over to use them.
+All work fine; we just need to finish switching the main dashboard over to use
+them.
---
@@ -64,9 +67,10 @@ All work fine - just need to finish switching the main dashboard over to use the
## Why It's Tricky
-These changes need to happen together. If I update the template but not the state registration,
-routing breaks. If I delete files before updating the template, the dashboard stops loading. Can't
-split this into many small PRs - it's one atomic change.
+These changes need to happen together. If I update the template but not the
+state registration, routing breaks. If I delete files before updating the
+template, the dashboard stops loading. Can't split this into many small PRs -
+it's one atomic change.
---
@@ -114,11 +118,12 @@ Step 4: Delete old files
## Risk Level: Low
-The Angular components already work. This is just switching which template system renders them. If
-something breaks, we can revert quickly since the old files are still in git history.
+The Angular components already work. This is just switching which template
+system renders them. If something breaks, we can revert quickly since the old
+files are still in git history.
-Main risk: making sure no other files reference `` besides the main dashboard
-template.
+Main risk: making sure no other files reference `` besides the
+main dashboard template.
---
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-pr-plan.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-pr-plan.md
index cf623987d..02ec49627 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-pr-plan.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Migration/task-dashboard-pr-plan.md
@@ -12,8 +12,8 @@ description: Plan for completing the task-dashboard migration
## Approach: 2 PRs
-After analyzing the code dependencies, I'm proposing 2 PRs instead of splitting this into many small
-ones.
+After analyzing the code dependencies, I'm proposing 2 PRs instead of splitting
+this into many small ones.
---
@@ -85,8 +85,8 @@ Change 4: Delete old files
✓ Complete migration
```
-**These are interdependent, not independent tasks.** Splitting them creates broken intermediate
-states that don't work or make sense.
+**These are interdependent, not independent tasks.** Splitting them creates
+broken intermediate states that don't work or make sense.
---
@@ -108,7 +108,8 @@ Target State:
Old files deleted ✓
```
-**Risk if split:** Each intermediate step leaves the codebase in a half-migrated state.
+**Risk if split:** Each intermediate step leaves the codebase in a half-migrated
+state.
---
@@ -139,14 +140,16 @@ Target State:
## Rollback Strategy
-If issues arise, revert the single PR. Old files remain in git history and can be restored quickly.
+If issues arise, revert the single PR. Old files remain in git history and can
+be restored quickly.
-**Why atomic changes matter:** One revert fixes everything vs. figuring out which of 5 PRs to
-revert.
+**Why atomic changes matter:** One revert fixes everything vs. figuring out
+which of 5 PRs to revert.
---
## Key Insight for Next Cohort
-Search the codebase for `` before starting - make sure dashboard.tpl.html is the
-only place using the old tag. Update module imports before deleting files to avoid build failures.
+Search the codebase for `` before starting - make sure
+dashboard.tpl.html is the only place using the old tag. Update module imports
+before deleting files to avoid build failures.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/spike-outcome-data-analytics.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/spike-outcome-data-analytics.md
index 83a13a9a8..40aeb0d0c 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/spike-outcome-data-analytics.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/spike-outcome-data-analytics.md
@@ -12,7 +12,8 @@ title: Spike Outcomes
## Goals / Deliverables
-Summarise from the spike plan goal*Besides this report, what else was created ie UML, code, reports*
+Summarise from the spike plan goal. *Besides this report, what else was
+created, i.e. UML, code, reports?*
- Data Analytics backlog on Trello
@@ -31,16 +32,19 @@ List key tasks likely to help another developer
## What we found out
-The Data Analytics feature as it currently exists in OnTrack provides a limited set of
-visualisations given the data available. The current visualisations are: Target Grade Pie Chart,
-Task Status Pie Chart & Task Completion Box Plot. For the future development of Data Analytics,
-although integrated visualisations will continue to play a key role in providing a quick overview of
-the data, the ability to export the data to a third party tool such as Tableau or PowerBI will be a
-key feature. This will allow for more complex visualisations to be created and for the data to be
-analysed in more depth. The current visualisations will need to be migrated to the latest version of
-Angular and maintained as they provide a quick overview of the data. One particular area where the
-current visualisations are lacking is the ability to track interaction time such as tutor time to
-provide task feedback, this is a key visualisation which could be added to the existing
-visualisations. In parallel to extending the current visualisations, the requirements of the export
-feature will be investigated including a questionnaire to understand the needs of Unit chairs and
-tutors, before designing UML diagrams and implementing the feature.
+The Data Analytics feature as it currently exists in OnTrack provides a limited
+set of visualisations given the data available. The current visualisations are:
+Target Grade Pie Chart, Task Status Pie Chart and Task Completion Box Plot. For
+the future development of Data Analytics, although integrated visualisations
+will continue to play a key role in providing a quick overview of the data, the
+ability to export the data to a third-party tool such as Tableau or PowerBI
+will be a key feature. This will allow more complex visualisations to be
+created and the data to be analysed in more depth. The current visualisations
+will need to be migrated to the latest version of Angular and maintained, as
+they provide a quick overview of the data. One particular area where the
+current visualisations are lacking is the ability to track interaction time,
+such as the time tutors spend providing task feedback; this is a key
+visualisation that could be added to the existing set. In parallel with
+extending the current visualisations, the requirements of the export feature
+will be investigated, including a questionnaire to understand the needs of unit
+chairs and tutors, before designing UML diagrams and implementing the feature.
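As a minimal sketch of the export idea above, not OnTrack's actual design, task
records could be flattened to CSV for ingestion by a tool such as Tableau or
PowerBI. All field names here are illustrative assumptions; the real columns
would follow the questionnaire findings.

```typescript
// Hypothetical task record shape; not the actual OnTrack data model.
interface TaskRecord {
  studentId: string;
  taskName: string;
  status: string;
}

// Flatten records into CSV, quoting fields that contain commas, quotes,
// or newlines so spreadsheet tools parse them correctly.
function toCsv(rows: TaskRecord[]): string {
  const header = 'student_id,task_name,status';
  const escape = (v: string) =>
    /[",\n]/.test(v) ? `"${v.replace(/"/g, '""')}"` : v;
  const lines = rows.map((r) =>
    [r.studentId, r.taskName, r.status].map(escape).join(',')
  );
  return [header, ...lines].join('\n');
}
```

A real export endpoint would stream this rather than build the whole string in
memory, but the escaping rules are the part that is easy to get wrong.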
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/testing-decision.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/testing-decision.md
index 2c9a053cf..aa12f523f 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/testing-decision.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Research & Findings/testing-decision.md
@@ -2,11 +2,12 @@
title: What is the current state of testing?
---
-In this issue, we will discuss the current state of testing and what tools are we proposing. While
-there are some existing unit tests for components written in TypeScript, there are no existing unit
-tests for the CoffeeScript components. Existing tests use Karma test runner integrated with Jasmine.
-Currently, the ng test command (Runs unit tests in a project) and npm install are not working due to
-dependencies issues, which are getting fixed.
+In this issue, we will discuss the current state of testing and what tools we
+are proposing. While there are some existing unit tests for components written
+in TypeScript, there are no existing unit tests for the CoffeeScript
+components. Existing tests use the Karma test runner integrated with Jasmine.
+Currently, the `ng test` command (which runs unit tests in a project) and
+`npm install` are not working due to dependency issues, which are being fixed.
## Comparing different alternatives
@@ -14,88 +15,99 @@ dependencies issues, which are getting fixed.
## Pros
--Selenium involves implementing browser drivers for the script to communicate with the web elements
-on the page. -Cypress is used by both developers and QA engineers -In Cypress, there is no
-additional IDE overhead. When you launch Cypress, it asks you to select an IDE to modify the test
-script. -The Cypress framework produces more accurate results. It’s because Cypress has greater
-control over the entire automation process. -Cypress instances respond in real-time to application
-events and commands. -Cypress does not utilize WebDriver for testing or doesn't send the command to
-the browser using a specific driver. If your language can someway be transpiled to JS, it can use
-DOM events to send the click command to the button or such. This results in a much faster execution
-of test results
+- Selenium involves implementing browser drivers for the script to communicate
+  with the web elements on the page.
+- Cypress is used by both developers and QA engineers.
+- In Cypress, there is no additional IDE overhead. When you launch Cypress, it
+  asks you to select an IDE to modify the test script.
+- The Cypress framework produces more accurate results, because Cypress has
+  greater control over the entire automation process.
+- Cypress instances respond in real time to application events and commands.
+- Cypress does not use WebDriver for testing, and doesn't send commands to the
+  browser through a specific driver. If your language can be transpiled to JS,
+  it can use DOM events to send the click command to a button. This results in
+  much faster test execution.
## Cons
--Cypress is a test runner mainly focused on end-to-end tests. For unit testing, there are better,
-faster alternatives. -Cypress is currently only supported for the Chrome, Firefox, Edge, Brave, and
-Electron browsers. -As a result, Cypress is a less favoured option for cross-browser testing. -For
-the building of test cases, it only supports the JavaScript framework. -Cypress doesn’t support
+- Cypress is a test runner mainly focused on end-to-end tests. For unit
+  testing, there are better, faster alternatives.
+- Cypress is currently only supported on the Chrome, Firefox, Edge, Brave, and
+  Electron browsers. As a result, Cypress is a less favoured option for
+  cross-browser testing.
+- For building test cases, it only supports JavaScript frameworks.
+- Cypress doesn't support remote execution.
## Selenium
### Pros
--Selenium is an Open Source Software. -Selenium supports various programming languages to write
-programs (Test scripts) -Selenium supports various operating systems (MS Windows, Linux, Macintosh
-etc...) -Selenium supports various Browsers (Mozilla Firefox, Google Chrome, IE, Opera, Safari
-etc...) -Selenium supports Parallel Test Execution. -Selenium uses fewer Hardware resources. -Good
+- Selenium is open-source software.
+- Selenium supports various programming languages for writing test scripts.
+- Selenium supports various operating systems (MS Windows, Linux, Macintosh,
+  etc.).
+- Selenium supports various browsers (Mozilla Firefox, Google Chrome, IE,
+  Opera, Safari, etc.).
+- Selenium supports parallel test execution.
+- Selenium uses fewer hardware resources.
+- Good choice for ongoing regression testing and end-to-end testing.
### Cons
--No reliable Technical Support from anybody. -It supports Web-based applications only. -Difficult to
-use, and takes more time to create Test cases. Takes more time to learn. -Difficult to set up Test
-Environment when it compares to Vendor Tools like UFT, RFT, SilkTest etc... -Limited support for
-Image Testing. -No Built-in Reporting facility. -Slow -Better choice for end-to-end testing than
+- No reliable technical support from anybody.
+- It supports web-based applications only.
+- Difficult to use, and takes more time to create test cases. Takes more time
+  to learn.
+- Difficult to set up a test environment compared to vendor tools like UFT,
+  RFT, SilkTest, etc.
+- Limited support for image testing.
+- No built-in reporting facility.
+- Slow.
+- Better choice for end-to-end testing than unit testing.
## Jest
### Pros
--The biggest advantage of using Jest is minimal setup or configuration. -It comes with an assertion
-library and mocking support -The tests are written in BDD style -You can put your tests inside of a
-directory called tests or name them with a .spec.js or .test.js - extension, then run jest and it
-works -Jest also supports snapshot testing
+- The biggest advantage of using Jest is minimal setup or configuration.
+- It comes with an assertion library and mocking support.
+- The tests are written in BDD style.
+- You can put your tests inside a directory called tests, or name them with a
+  .spec.js or .test.js extension, then run jest and it works.
+- Jest also supports snapshot testing.
### Cons
--Jest’s biggest weaknesses stem from being newer and less widely used among JavaScript developers.
--It has less tooling and library support available compared to more mature libraries (like Mocha).
--WebStorm didn’t even support running Jest tests. -Due to its young age, it may also be more
-difficult to use Jest across the board for larger projects that utilize different types of testing.
--Slower due to auto mocking -Poor documentation
+- Jest's biggest weaknesses stem from being newer and less widely used among
+  JavaScript developers.
+- It has less tooling and library support available compared to more mature
+  libraries (like Mocha).
+- WebStorm didn't even support running Jest tests.
+- Due to its young age, it may also be more difficult to use Jest across the
+  board for larger projects that utilise different types of testing.
+- Slower due to auto mocking.
+- Poor documentation.
## Karma + Jasmine
### Pros
--When creating Angular projects using the Angular CLI, Jasmine and Karma are used to create and run
-unit tests by default. -Karma is a test runner built by the angularJS to make TDD easy in Angular
-Project Testing. -Karma is a JavaScript test runner that fits the needs of an AngularJS developer.
--Jasmine is compatible with almost every framework or library of your choice - The Jasmine BDD
-library makes it easy to define tests, run them, and integrate them -Jasmine does not rely on any
-JavaScript framework, DOM, or browsers. -We can run Jasmine tests in a browser ourselves by setting
-up and loading an HTML file, but more commonly we use a command-line tool called Karma. -Karma
-handles the process of creating HTML files, opening browsers and running tests and returning the
-results of those tests to the command line. -If you use the Angular CLI to manage projects it
-automatically creates stub Jasmine spec files for you when generating code. -It also handles the
-Karma configuration, transpilation and bundling of your files -It offers clean and polished syntax.
--Jasmine provides a rich set of built-in matchers that can match expectations and add asserts to the
+- When creating Angular projects using the Angular CLI, Jasmine and Karma are
+  used to create and run unit tests by default.
+- Karma is a test runner built by the AngularJS team to make TDD easy in
+  Angular project testing.
+- Karma is a JavaScript test runner that fits the needs of an AngularJS
+  developer.
+- Jasmine is compatible with almost every framework or library of your choice.
+- The Jasmine BDD library makes it easy to define tests, run them, and
+  integrate them.
+- Jasmine does not rely on any JavaScript framework, DOM, or browsers.
+- We can run Jasmine tests in a browser ourselves by setting up and loading an
+  HTML file, but more commonly we use a command-line tool called Karma.
+- Karma handles the process of creating HTML files, opening browsers, running
+  tests, and returning the results of those tests to the command line.
+- If you use the Angular CLI to manage projects, it automatically creates stub
+  Jasmine spec files for you when generating code.
+- It also handles the Karma configuration, transpilation, and bundling of your
+  files.
+- It offers clean and polished syntax.
+- Jasmine provides a rich set of built-in matchers that can match expectations
+  and add asserts to the test cases.
### Cons
--Asynchronous testing can be a bit of a headache -js is required for running Karma -Expects a
-specific suffix to all test files (\*spec.js by default)
+- Asynchronous testing can be a bit of a headache.
+- Node.js is required for running Karma.
+- Expects a specific suffix for all test files (\*spec.js by default).
## Proposal
-Karma handles the process of creating HTML files, opening browsers and running tests and returning
-the results of those tests to the command line. When using Angular CLI to manage projects it
-automatically creates stub Jasmine spec files for you when generating code. On top of that, it is
-already implemented in our code base. It goes well with our TDD approach. It will also be efficient
-for unit testing and requires minimal configurations so that we can spend more time on coding and
-quality testing itself. Additionally, there is also good online documentation and resources
-available for training.
+Karma handles the process of creating HTML files, opening browsers, running
+tests, and returning the results of those tests to the command line. When using
+the Angular CLI to manage projects, it automatically creates stub Jasmine spec
+files for you when generating code. On top of that, it is already implemented
+in our code base. It goes well with our TDD approach. It will also be efficient
+for unit testing and requires minimal configuration, so we can spend more time
+on coding and quality testing itself. Additionally, there is good online
+documentation and there are resources available for training.
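For reference, a minimal `karma.conf.js` in the shape the Angular CLI generates
could look like the sketch below. The plugin names follow common Angular CLI
defaults and are assumptions to be confirmed against the doubtfire-web
`package.json`, not a description of the current config.

```javascript
// Minimal karma.conf.js sketch, assuming Angular CLI defaults.
// Plugin names are assumptions; verify against the project's package.json.
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
    ],
    reporters: ['progress', 'kjhtml'],
    browsers: ['Chrome'],
    restartOnFileChange: true,
  });
};
```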
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Testing/unit-test.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Testing/unit-test.md
index a44f29af4..bf19a5a9e 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Testing/unit-test.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/Testing/unit-test.md
@@ -29,7 +29,8 @@ describe('NotFoundComponent', () => {
});
```
-_Replace the (NotFoundComponent) component name with your choosen compoent name._
+_Replace the (NotFoundComponent) component name with your chosen component
+name._
```shell
npm install
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/component-review-create-unit-modal.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/component-review-create-unit-modal.md
index 50e10e4e8..bf5fdac8b 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/component-review-create-unit-modal.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/component-review-create-unit-modal.md
@@ -26,8 +26,8 @@ It creates a new unit.

-Currently, the modal had 2 text input fields unit code and unit name. In the new modal a 3rd dorp
-down field is to be added, teaching period.
+Currently, the modal has 2 text input fields: unit code and unit name. In the
+new modal a 3rd, drop-down field is to be added: teaching period.
So, in the updated modal the user provides the following:
@@ -35,20 +35,22 @@ So, in the updated modal the user provides the following:
2. Unit Name
3. Teaching period
-New design sketch: Existing UI components are to be used for the input fields and button etc.
+New design sketch: existing UI components are to be used for the input fields,
+buttons, etc.

Link to figma:
[here]()
-**Component migration Check list** – What is needs to be checked for this component to work once
-migrated?
+**Component migration checklist** – What needs to be checked for this
+component to work once migrated?
[ ] ability to collect details from the user
[ ] succeeds when data is valid
-[ ] handles errors - duplicate unit code in the teaching period, or invalid dates
+[ ] handles errors - duplicate unit code in the teaching period, or invalid
+dates
[ ] created unit is shown on success
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/local-storage.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/local-storage.md
index 1abd3c74e..5ff61fc92 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/local-storage.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/local-storage.md
@@ -12,7 +12,8 @@ First select a component to review from the list below:
## Component Name
-local-storage.coffee - doubtfire-web/src/app/config/local-storage/local-storage.coffee
+local-storage.coffee -
+doubtfire-web/src/app/config/local-storage/local-storage.coffee
Relevant files:
@@ -20,27 +21,32 @@ Relevant files:
## Component purpose
-this component local-storage.coffee is used to configures 'localstorage' usage in the application
+This component, local-storage.coffee, is used to configure 'localstorage' usage
+in the application.

## Component outcomes/interactions
-Basically, this component is used to configure the "localstorage" usage, which is used to set up the
-prefix on the key-value pair that gets stored in the web browser's local storage. for example: (user
-id, email), (login time), and (credentials token).
+Basically, this component is used to configure the "localstorage" usage, which
+sets up the prefix on the key-value pairs that get stored in the web browser's
+local storage: for example, (user id, email), (login time), and (credentials
+token).
## Component migration plan
-As this is a non visual componet which just configures local-storage, so for that i will be creating
-a new Typescript file local-storage.component.ts and remove the old local-storage.coffee file.
+As this is a non-visual component which just configures local-storage, I will
+be creating a new TypeScript file, local-storage.component.ts, and removing the
+old local-storage.coffee file.
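As a hedged sketch of that migration (class and method names here are
illustrative, not the actual OnTrack API), the key-prefixing behaviour could be
reproduced by a small TypeScript wrapper. The backing store is injected so the
logic works against `window.localStorage` in the browser but is testable
without one.

```typescript
// Minimal Storage-like contract, satisfied by window.localStorage.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

// Hypothetical replacement for local-storage.coffee: prefixes every key
// stored (e.g. user id, login time, credentials token) the way the old
// CoffeeScript config did.
class PrefixedStorage {
  constructor(private prefix: string, private store: KeyValueStore) {}

  set(key: string, value: string): void {
    this.store.setItem(this.prefix + key, value);
  }

  get(key: string): string | null {
    return this.store.getItem(this.prefix + key);
  }

  remove(key: string): void {
    this.store.removeItem(this.prefix + key);
  }
}

// Map-backed store that stands in for window.localStorage in tests.
class MemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  getItem(key: string): string | null { return this.data.get(key) ?? null; }
  setItem(key: string, value: string): void { this.data.set(key, value); }
  removeItem(key: string): void { this.data.delete(key); }
}
```

In the browser build the real `window.localStorage` would be injected in place
of `MemoryStore`.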
-**Component review checklist** – What is needs to be checked for this component to work once
-migrated?
+**Component review checklist** – What needs to be checked for this component
+to work once migrated?
-once migrated we need to check whether the code compiles without any errors or warnings.
+Once migrated, we need to check whether the code compiles without any errors or
+warnings.
## Discussion with Client (Andrew Cain)
-See if the component is still needed and present this document so Andrew can review if all the
-outcomes and interactions are correct prior to the migration and build of this component.
+See if the component is still needed and present this document so Andrew can
+review if all the outcomes and interactions are correct prior to the migration
+and build of this component.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/on-long-press.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/on-long-press.md
index 4d4b0af93..deacbe553 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/on-long-press.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/on-long-press.md
@@ -22,29 +22,34 @@ Relevant files:
## Component purpose
-this component on-long-press can detect when a user touches and holds a button for a certain amount
-of time (600 milliseconds by default). When this happens it can trigger certain action.this
-functionality can be added to any element as an attribute.
+This component, on-long-press, can detect when a user touches and holds a
+button for a certain amount of time (600 milliseconds by default). When this
+happens it can trigger a certain action. This functionality can be added to any
+element as an attribute.
## Component outcomes/interactions
-Basically, this component is used to trigger a special action that can be defined for any element
-such as a button. This is useful for touch-based 、interfaces for example on mobile devices, where
-holding down on an element can perform a specific action.
+Basically, this component is used to trigger a special action that can be
+defined for any element such as a button. This is useful for touch-based
+interfaces, for example on mobile devices, where holding down on an element can
+perform a specific action.
## Component migration plan
-As this is a non visual componet that has a functionality to detect long presses which can be added
-to any element as an attribute So I will be converting the old coffee file into .ts file and an html
-file to create a button that uses the onLongPress directive to trigger a long press event.
+As this is a non-visual component providing functionality to detect long
+presses, which can be added to any element as an attribute, I will be
+converting the old coffee file into a .ts file, and adding an HTML file to
+create a button that uses the onLongPress directive to trigger a long-press
+event.
## Component review checklist
What needs to be checked for this component to work once migrated?
-once migrated we need to check whether the code compiles without any errors or warnings.
+Once migrated, we need to check whether the code compiles without any errors or
+warnings.
## Discussion with Client (Andrew Cain)
-See if the component is still needed and present this document so Andrew can review if all the
-outcomes and interactions are correct prior to the migration and build of this component.
+See if the component is still needed and present this document so Andrew can
+review if all the outcomes and interactions are correct prior to the migration
+and build of this component.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-breaks.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-breaks.md
index 0d861b301..630dd17bc 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-breaks.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-breaks.md
@@ -17,30 +17,33 @@ Relevant files:
## Component Purpose
-The purpose of the component is to display the breaks that have been registered against a teaching
-period. It also allows the user to sort the list by different criteria.
+The purpose of the component is to display the breaks that have been registered
+against a teaching period. It also allows the user to sort the list by different
+criteria.

## Component Outcomes and Interactions
-The expected outcome of the component is to provide a user-friendly interface for managing breaks
-registered against a teaching period, allowing the user to quickly find and view information about
-specific breaks.
+The expected outcome of the component is to provide a user-friendly interface
+for managing breaks registered against a teaching period, allowing the user to
+quickly find and view information about specific breaks.
-Interaction occurs with the user through filtering and pagination controls. A button is clickable
-which invokes the `CreateBreakModal` (which is out of scope for this review).
+Interaction occurs with the user through filtering and pagination controls. A
+button is clickable which invokes the `CreateBreakModal` (which is out of scope
+for this review).
-The component takes in a `teachingPeriod` object where its properties are used to display
-information in the user interface.
+The component takes in a `teachingPeriod` object where its properties are used
+to display information in the user interface.
## Component Migration Plan
-The migration plan is to review similar tabular based components that have already been migrated to
-TypeScript and Material UI.
+The migration plan is to review similar tabular based components that have
+already been migrated to TypeScript and Material UI.
-For example, the `unit-students-editor` component. Based on this review, migrate the component in
-such a way that is in line with the previous works to maintain consistency.
+For example, the `unit-students-editor` component. Based on this review, migrate
+the component in such a way that is in line with the previous works to maintain
+consistency.
`unit-students-editor`
@@ -48,7 +51,7 @@ such a way that is in line with the previous works to maintain consistency.
## Component Post-Migration
-The work required to migrate the component is now complete and the migrated component is shown
-below.
+The work required to migrate the component is now complete and the migrated
+component is shown below.

diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-details-editor.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-details-editor.md
index 50b7a6166..7b0017039 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-details-editor.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-details-editor.md
@@ -17,25 +17,28 @@ Relevant files:
## Component Purpose
-The purpose of the component is to edit the details for a teaching period. It also allows the user
-to update key properties of a teaching period, such as the name and length.
+The purpose of the component is to edit the details for a teaching period. It
+also allows the user to update key properties of a teaching period, such as the
+name and length.

## Component Outcomes and Interactions
-The expected outcome of the component is to provide a user-friendly interface for updating the key
-properties of a teaching period.
+The expected outcome of the component is to provide a user-friendly interface
+for updating the key properties of a teaching period.
-Interaction occurs with the user through a form which contains a series of text and date inputs.
+Interaction occurs with the user through a form which contains a series of text
+and date inputs.
## Component Migration Plan
-The migration plan is to review similar form based components that have already been migrated to
-TypeScript and Material UI.
+The migration plan is to review similar form based components that have already
+been migrated to TypeScript and Material UI.
-For example, the `edit-profile-form` component. Based on this review, migrate the component in such
-a way that is in line with the previous works to maintain consistency.
+For example, the `edit-profile-form` component. Based on this review, migrate
+the component in such a way that is in line with the previous works to maintain
+consistency.
`edit-profile-form`
@@ -43,7 +46,7 @@ a way that is in line with the previous works to maintain consistency.
## Component Post-Migration
-The work required to migrate the component is now complete and the migrated component is shown
-below.
+The work required to migrate the component is now complete and the migrated
+component is shown below.

diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-units.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-units.md
index e4513c3e9..9fb902d91 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-units.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/UI Enhancement/teching-period-units.md
@@ -17,32 +17,34 @@ Relevant files:
## Component Purpose
-The purpose of the component is to display the units that have been registered against a teaching
-period. It also allows the user to search for specific units, sort the list by different criteria,
-and navigate to a unit detail view.
+The purpose of the component is to display the units that have been registered
+against a teaching period. It also allows the user to search for specific units,
+sort the list by different criteria, and navigate to a unit detail view.

## Component Outcomes and Interactions
-The expected outcome of the component is to provide a user-friendly interface for managing units
-registered against a teaching period, allowing the user to quickly find and view information about
-specific units.
+The expected outcome of the component is to provide a user-friendly interface
+for managing units registered against a teaching period, allowing the user to
+quickly find and view information about specific units.
-Interaction occurs with the user through filtering and pagination controls. Each table row is
-clickable, which links to the unit detail page. A button is clickable which invokes the
-`RolloverTeachingPeriodModal` (which is out of scope for this review).
+Interaction occurs with the user through filtering and pagination controls.
+Each table row is clickable and links to the unit detail page. A separate
+button invokes the `RolloverTeachingPeriodModal` (which is out of scope for
+this review).
-The component takes in a `teachingPeriod` object where its properties are used to display
-information in the user interface.
+The component takes in a `teachingPeriod` object whose properties are used to
+display information in the user interface.
## Component Migration Plan
-The migration plan is to review similar tabular based components that have already been migrated to
-TypeScript and Material UI.
+The migration plan is to review similar table-based components that have
+already been migrated to TypeScript and Material UI.
-For example, the `unit-students-editor` component. Based on this review, migrate the component in
-such a way that is in line with the previous works to maintain consistency.
+For example, the `unit-students-editor` component. Based on this review,
+migrate the component in a way that aligns with previous work to maintain
+consistency.
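The table behaviour described earlier (search, sort, paginate) can be sketched as a pure pipeline; the `UnitSummary` shape and field names below are assumptions for illustration:

```typescript
// Hypothetical unit shape; the real model has more fields.
interface UnitSummary {
  code: string;
  name: string;
}

// Filter by a search string, sort by a column, then slice out one page,
// mirroring the order in which the table applies its controls.
function visibleUnits(
  units: UnitSummary[],
  search: string,
  sortBy: keyof UnitSummary,
  page: number,
  pageSize: number,
): UnitSummary[] {
  const needle = search.toLowerCase();
  const filtered = units.filter(
    (u) =>
      u.code.toLowerCase().includes(needle) ||
      u.name.toLowerCase().includes(needle),
  );
  const sorted = [...filtered].sort((a, b) =>
    a[sortBy].localeCompare(b[sortBy]),
  );
  return sorted.slice(page * pageSize, (page + 1) * pageSize);
}
```

Filtering before sorting and paginating keeps page numbers stable for a given search, which is the behaviour users expect from such a table.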
`unit-students-editor`
@@ -50,7 +52,7 @@ such a way that is in line with the previous works to maintain consistency.
## Component Post-Migration
-The work required to migrate the component is now complete and the migrated component is shown
-below.
+The work required to migrate the component is now complete and the migrated
+component is shown below.

diff --git a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/introduction.md b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/introduction.md
index 8cf6abe11..7f0127b39 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Front End Migration/introduction.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Front End Migration/introduction.md
@@ -4,35 +4,37 @@ title: Entry point of OnTrack UIEnhancemnt
## T3 / 2022
-OnTrack is now a hybrid project which is using Bootstrap/AngularJS + MUI/Angular, it is build using
-different reusable components.
+OnTrack is now a hybrid project using Bootstrap/AngularJS and MUI/Angular; it
+is built from different reusable components.
-AngularJS support has officially ended as of **January 2022**. The code will remain accessible on
-GitHub, npm, Bower, and Release archive. This website will remain here indefinitely. The GitHub
-repository will be in an archived state, meaning that no new issues or pull requests can be
-submitted. CoffeeScript can be viewed as a fine complement to JavaScript, not a replacement.
+AngularJS support has officially ended as of **January 2022**. The code will
+remain accessible on GitHub, npm, Bower, and the release archive, and the
+AngularJS website will remain available indefinitely. The GitHub repository
+will be in an archived state, meaning that no new issues or pull requests can
+be submitted.
-Therefore, the OnTrack frontend is looking forward to switching to the new MUI/Angular approach and
-trying to keep things more up to date going forward.
+Therefore, the OnTrack frontend is looking forward to switching to the new
+MUI/Angular approach and trying to keep things more up to date going forward.
## Aims for Trimester
---
1. Testing new branch
- - We have a special request from our director Andrew that we need to execute some on testing the
- doubtfire-web(quality/entity-service-to-npm) branch with
- doubtfire-api(refactor/entity-service-backend), write up some test scripts for people to run to
- verify that things work branch.
- - Tests cases can just be actions for someone to perform. Now those branches have migrated much
- of the front end. We now need to exhaustively test it to make sure things work. Log the issue
- and report to director
+ - We have a special request from our director Andrew: test the
+ doubtfire-web (quality/entity-service-to-npm) branch against
+ doubtfire-api (refactor/entity-service-backend), and write up test
+ scripts for people to run to verify that the branch works.
+ - Test cases can just be actions for someone to perform. Those branches
+ have now migrated much of the front end, so we need to test it
+ exhaustively to make sure things work. Log any issues and report them
+ to the director.
2. Components Migration
- - There is **168 components** waiting to be migrated, in T3/2022, I hope we can continue the work
- that we left in previous trimester and assign some simple components for Juniors. Delivery lead
- should involve continuing the ongoing components, seniors should continue his work from
- previous trimester.
+ - There are **168 components** waiting to be migrated. In T3/2022, I hope
+ we can continue the work left from the previous trimester and assign
+ some simple components to juniors. The delivery lead should keep the
+ ongoing components moving, and seniors should continue their work from
+ the previous trimester.
## Deliverables
@@ -40,7 +42,8 @@ trying to keep things more up to date going forward.
### Short Term
-- Gather information and continue the migration work that left from previous trimester.
+- Gather information and continue the migration work left over from the
+ previous trimester.
- Develop and deliver at least 5 migrated components.
- Understand and plan for testing new branch
- Give Juniors OnTrack ASAP.
@@ -49,14 +52,15 @@ trying to keep things more up to date going forward.
**_Long Term:_**
- Build enough passion for juniors to join the same project next trimester
-- Extent documentation for new member to be able to understand the project quickly.
+- Extend the documentation so new members can understand the project quickly.
## **What to do next**
---
-If you are new member into this team, welcome! and we are going to guide you step by step to show
-you what you sould do next if you are:
+If you are a new member of this team, welcome! We are going to guide you step
+by step and show you what you should do next if you are:
**_Junior_**
@@ -67,9 +71,10 @@ As a junior we suggest that you should:
(ISBN 978-1491916759)
- [Angular JS tutorial](https://www.youtube.com/playlist?list=PL6n9fhu94yhWKHkcL7RJmmXyxkuFB3KSl)
- _To be added in next-trimester_
-2. Set up the project on your local machine. For Windows user, you need to do an extra step to
- inistall a **WSL2 virtual machine**. Make sure you dont have VMware or VirtualBox installed that
- will enable the HyperV feature which **conflict** with Docker.
+2. Set up the project on your local machine. For Windows users, there is an
+ extra step: install a **WSL2 virtual machine**. Make sure you don't have
+ VMware or VirtualBox installed, as that will enable the Hyper-V feature,
+ which **conflicts** with Docker.
- **Windows (WSL2)**
1. Follow
[Docker Compose with WSL2](/products/ontrack/documentation/front-end-migration/deploy-ontrack/docker-compose-with-wsl2)
@@ -79,14 +84,15 @@ As a junior we suggest that you should:
guideline.
3. Watch
[Docker Setup Tutorial](https://drive.google.com/file/d/16A5zzG3g0S1B0PCKWrFK9anLhheXgi_b/view?usp=sharing)
- > Please note that the tutorial used Windows CMD enviroment, it should use WSL2 machine
- > instead. See the
+ > Please note that the tutorial used a Windows CMD environment; you
+ > should use a WSL2 machine instead. See the
> [issue](https://teams.microsoft.com/l/message/19:bd20175d09414f079490a2403f7fca74@thread.tacv2/1659408245022?tenantId=d02378ec-1688-46d5-8540-1c28b5f470f6&groupId=0e15669c-3f66-49aa-b023-640fe1dda2e0&parentMessageId=1659398288375&teamName=Thoth).
- **Mac / Linux**
1. Read
[Docker Setup Tutorial](/products/ontrack/documentation/front-end-migration/deploy-ontrack/docker-setup-tutorial)
guideline.
- > If the servers in the docker running into issues, please follow the backup plan -
+ > If the servers in Docker run into issues, please follow the backup
+ > plan -
> [Troubleshooting Docker - Backup for OnTrack](/products/ontrack/documentation/front-end-migration/deploy-ontrack/troubleshooting-docker-backup-for-ontrack)
3. Migration
1. Read
@@ -96,16 +102,19 @@ As a junior we suggest that you should:
[Regular Commit](/products/ontrack/documentation/front-end-migration/migration/regular-migration-step)
guideline.
4. Testing
- 1. Read [Unit Test](/products/ontrack/documentation/front-end-migration/testing/unit-test)
+ 1. Read
+ [Unit Test](/products/ontrack/documentation/front-end-migration/testing/unit-test)
guideline.
5. Do **report any issues** or questions to the senior or delivery lead.
-6. **Writting docemnts** during the learning process which you found intresting or worth to know.
+6. **Write documents** during the learning process about anything you found
+ interesting or worth knowing.
7. Start to migrate some simple components.
**_Senior_**
1. Answer juniors' questions; report issues to the lead if there is no solution.
-2. Carry on the components that are in the middle of migrating in last trimester.
+2. Carry on with the components that were left mid-migration last trimester.
---
diff --git a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/design-document.md b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/design-document.md
index 0263b8ea4..f17602d56 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/design-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/design-document.md
@@ -8,61 +8,66 @@ Company: Thoth Tech
## Introduction
-This document outlines the design approach for the feature called ‘Incorporate Content in OnTrack.’
-This feature will be implemented in Ontrack application, which is known as the main source for
-course management system for students. This feature aims to provide better flexibility and
-performance by allowing unit chair to host content within Ontrack for students to access. This
-feature will not only enable unit chair to host content within OnTrack but also organise content,
-search content and update content all within OnTrack. This design document will outline error
-handling and validations, testing and technical aspects of the application.
+This document outlines the design approach for the feature called ‘Incorporate
+Content in OnTrack.’ This feature will be implemented in the OnTrack
+application, the main course management system for students. The feature aims
+to provide better flexibility and performance by allowing unit chairs to host
+content within OnTrack for students to access. It will enable unit chairs not
+only to host content within OnTrack but also to organise, search, and update
+content, all within OnTrack. This design document will outline error handling
+and validation, testing, and technical aspects of the application.
## User Story
-As a unit chair I want to host content within OnTrack and as a student I want to access the content
-within OnTrack.
+As a unit chair, I want to host content within OnTrack, and as a student, I
+want to access the content within OnTrack.
## Architecture
-The feature ‘incorporate content in OnTrack’ will be smoothly added to the existing architecture of
-the system. This architecture will have frontend and backend component which will uphold the feature
-flexibility.
+The feature ‘incorporate content in OnTrack’ will be added smoothly to the
+existing architecture of the system. The architecture will have frontend and
+backend components, which will uphold the feature's flexibility.
## Frontend Implementation
-In the frontend, an additional button will be added under the dashboard for both unit chair and
-students. For unit chair, the button will open the content page within OnTrack, which allow unit
-chair to add content, and make any updates to the content which has been hosted. Additionally, unit
-chair can set time and date to when the content should go live, which falls under the organise
-content. For students, under the dashboard a new button will be added which will allow them to
-access the content as well as search any content and lastly download relevant content to their
-device.
+In the frontend, an additional button will be added under the dashboard for
+both unit chairs and students. For unit chairs, the button will open the
+content page within OnTrack, which allows them to add content and make updates
+to content that has already been hosted. Additionally, unit chairs can set the
+time and date when content should go live, which falls under organising
+content. For students, a new button will be added under the dashboard which
+will allow them to access content, search for content, and download relevant
+content to their device.
### UI Integration
-When building the application, make sure the design follows the Tailwind CSS, which can provide
-better user-friendly experience. Additionally, use Angular component when adding the user interface
-element in the application.
+When building the application, make sure the design follows Tailwind CSS,
+which provides a more user-friendly experience. Additionally, use Angular
+components when adding user interface elements to the application.
## Backend Implementation
-Make changes to the API endpoint which validates user inputs, along with those API that allows unit
-chair to host content, organise content, update content and also access permission. In terms of
-student, make changes to API which allows them to access the content, download content and search
-content.
+Make changes to the API endpoints that validate user inputs, along with those
+APIs that allow unit chairs to host, organise, and update content and manage
+access permissions. For students, make changes to the APIs that allow them to
+access, download, and search content.
### Database
-Should keep a record of when unit chair add content and last made changes to the content. Use Maria
-DB for data management.
+Keep a record of when the unit chair adds content and when the content was
+last changed. Use MariaDB for data management.
### User Authentication
-Make sure that only authenticate unit chair for certain unit is allowed to host the content.
+Make sure that only an authenticated unit chair for a given unit is allowed to
+host the content.
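A minimal sketch of this permission rule follows; the role names and the user shape here are assumptions for illustration, not the actual OnTrack authorisation model:

```typescript
// Hypothetical user/role shapes for illustration only.
interface UnitRole {
  unitId: number;
  role: 'Convenor' | 'Tutor' | 'Student';
}

interface User {
  id: number;
  unitRoles: UnitRole[];
}

// Only an authenticated unit chair (convenor here) of the given unit may
// host content; anyone else, including unauthenticated callers, is denied.
function canHostContent(user: User | null, unitId: number): boolean {
  if (user === null) {
    return false; // not authenticated
  }
  return user.unitRoles.some(
    (r) => r.unitId === unitId && r.role === 'Convenor',
  );
}
```

The same check would also back Test case 3 (unauthorized access) later in this document.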
## Database Design
-- **Table:** Create a table called ‘ContentRecord’ which stores all the information about when the
- unit chair hosted content within OnTrack and when last modified.
+- **Table:** Create a table called ‘ContentRecord’ which stores all the
+ information about when the unit chair hosted content within OnTrack and
+ when it was last modified.
**Column:**
@@ -73,16 +78,17 @@ Make sure that only authenticate unit chair for certain unit is allowed to host
## Validation
-- **Error messages:** Display error messages in its relevant place when unit chair puts irrelevant
- date for when the content should be seen on the OnTrack application.
-- **API validation:** Validate that the input from the frontend is appropriate and that it meets the
- requirement.
+- **Error messages:** Display error messages in the relevant place when a unit
+ chair enters an invalid date for when the content should be visible in the
+ OnTrack application.
+- **API validation:** Validate that the input from the frontend is appropriate
+ and that it meets the requirements.
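For illustration, the date check described above might look like the following; the rule that the go-live date must fall inside the teaching period is an assumption drawn from the surrounding text, not a confirmed requirement:

```typescript
// Returns an error message when the go-live date is invalid, or null when
// it is acceptable. The exact validity rule is an assumption.
function validateGoLiveDate(
  goLive: Date,
  periodStart: Date,
  periodEnd: Date,
): string | null {
  if (
    goLive.getTime() < periodStart.getTime() ||
    goLive.getTime() > periodEnd.getTime()
  ) {
    return 'The go-live date must fall within the teaching period.';
  }
  return null;
}
```

Running the same check on both the frontend and the API keeps the two error messages consistent.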
## Exception handling
-Ensure that server-side error is implemented in its place for handling errors that can occur during
-the process. Additionally, keep record of the errors that occur during the process for debugging
-purposes.
+Ensure that server-side error handling is implemented for errors that can
+occur during the process. Additionally, keep a record of the errors that occur
+for debugging purposes.
## Testing
@@ -98,8 +104,8 @@ Steps:
1. Pick the date you want to upload the content.
1. Click Host.
-Expected outcome: Adding content is successful and now students will be able to view the content
-within OnTrack.
+Expected outcome: Adding content is successful and now students will be able to
+view the content within OnTrack.
### Test Case 2: Edit content within OnTrack
@@ -111,15 +117,18 @@ Steps:
1. Click on the content and start editing.
1. Click save once done.
-Expected Outcome: New version will be uploaded in OnTrack which can be viewed by the student.
+Expected Outcome: The new version will be uploaded to OnTrack, where it can be
+viewed by students.
### Test case 3: Unauthorized Access
-Description: Making sure that only unit chair can host the content within OnTrack.
+Description: Making sure that only the unit chair can host content within
+OnTrack.
Steps:
-1. If anyone attempts to access the host content functionality without any authentication.
+1. Anyone attempts to access the host-content functionality without
+ authentication.
Expected Outcome: An error message will be displayed on the screen.
@@ -133,41 +142,47 @@ Steps:
1. Navigate to dashboard and click on content.
1. Start viewing certain content.
-Expected Outcome: It will allow students to have access to the content which was added by the unit
-chair. Additionally have the task sheet and the content in one place.
+Expected Outcome: Students will have access to the content added by the unit
+chair, and they will have the task sheet and the content in one place.
## Deployment
-Deployment plan will outline how steadily the new feature called ‘Incorporate Content in OnTrack’
-will be introduced in the current system of OnTrack.
+The deployment plan outlines how the new feature, ‘Incorporate Content in
+OnTrack’, will be gradually introduced into the current OnTrack system.
-- Have a backup copy of the existing system for any risk that can occur during the deployment stage.
+- Keep a backup copy of the existing system in case any risk eventuates during
+ the deployment stage.
- Run testing for both frontend and backend to identify any issues.
- Ensure that the code follows the standards and its best practices.
-- Make announcement to students and unit chairs about the new feature, along with any downtime when
- the deployment process is taking place.
+- Make an announcement to students and unit chairs about the new feature,
+ along with any downtime while the deployment is taking place.
- Include instructions for unit chair on how to use the new feature.
-- Record the new feature during the first few days for any problems that can occur.
+- Monitor the new feature during the first few days for any problems that may
+ occur.
### Frontend Deployment
-Deploy frontend into production server and ensure that the new feature is smoothly integrated into
-the existing system.
+Deploy the frontend to the production server and ensure that the new feature
+is smoothly integrated into the existing system.
### Backend Deployment
-Deploy backend into production server and keep record of any errors that can occur during the
-process of deployment. Additionally ensure that the new feature is smoothly integrated into the
-existing system.
+Deploy the backend to the production server and keep a record of any errors
+that occur during deployment. Additionally, ensure that the new feature is
+smoothly integrated into the existing system.
## Conclusion
-The new feature ‘incorporate content in OnTrack’ will allow unit chairs to host content within
-OnTrack, which will be accessible by the students. Additionally, it will allow unit chair to make
-changes to the content that is already visible to the students, along with organising when certain
-content should be make visible in the OnTrack application. This design documents outlines how the
-feature will be integrated into OnTrack system. The design documents shows that performance,
-reliability, and usability will be upheld for better user experience.
-
-By incorporating this new feature within the existing OnTrack, allows students to view both the
-content and task sheet for that unit in place, which upheld the flexibility aspect of this feature.
+The new feature ‘incorporate content in OnTrack’ will allow unit chairs to
+host content within OnTrack, which will be accessible to students.
+Additionally, it will allow unit chairs to make changes to content that is
+already visible to students, along with organising when certain content should
+be made visible in the OnTrack application. This design document outlines how
+the feature will be integrated into the OnTrack system, and shows that
+performance, reliability, and usability will be upheld for a better user
+experience.
+
+Incorporating this new feature within the existing OnTrack allows students to
+view both the content and the task sheet for a unit in one place, which
+upholds the flexibility aspect of this feature.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/gather-requirements.md b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/gather-requirements.md
index f4bb187d5..afedbb097 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/gather-requirements.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/gather-requirements.md
@@ -8,15 +8,16 @@ Company: Thoth Tech
## Introduction
-This documentation outlines the requirements for implementing a feature called ‘Incorporate content
-in OnTrack.’ This feature will be implemented in the OnTrack application. This feature aims to
-provide better flexibility by allowing unit chair to add content in one place where it makes easier
-for students to access both the content and the task sheet for the unit in one place.
+This documentation outlines the requirements for implementing a feature called
+‘Incorporate content in OnTrack.’ This feature will be implemented in the
+OnTrack application. It aims to provide better flexibility by allowing unit
+chairs to add content in one place, making it easier for students to access
+both the content and the task sheet for the unit in one place.
## User Story
-As an OnTrack unit chair I want to be able to host content within Ontrack and as a student I want to
-able to access the content within Ontrack.
+As an OnTrack unit chair, I want to be able to host content within OnTrack,
+and as a student, I want to be able to access the content within OnTrack.
## Functional Requirements
@@ -27,26 +28,29 @@ able to access the content within Ontrack.
### Frontend
-- Under the dashboard an interface should be added to the unit chair Ontrack screen that will allow
- them to add content and hide certain content.
-- Under the dashboard an interface should be added in the student end that will allow them to view
- content.
+- Under the dashboard, an interface should be added to the unit chair's
+ OnTrack screen that will allow them to add content and hide certain content.
+- Under the dashboard, an interface should be added on the student end that
+ will allow them to view content.
- An interface should be added for searching relevant content.
## Non-Functional Requirements
### Performance
-- The feature should run smoothly and provide the best experience for both students and unit chairs.
+- The feature should run smoothly and provide the best experience for both
+ students and unit chairs.
### Reliability
- The feature should be able to handle failures and should recover quickly.
-- ``Additionally, the feature should perform without failure for 95% of use cases.
+- Additionally, the feature should perform without failure for 95% of use
+ cases.
### Usability
-- The user interface should be consistent with its navigation and the overall design.
+- The user interface should be consistent with its navigation and the overall
+ design.
- The user interface should be user-friendly.
- The feature should require only minimal training for unit chairs.
@@ -64,8 +68,8 @@ Steps:
1. Pick the date you want to upload the content.
1. Click Host.
-Expected outcome: Adding content is successful and now students will be able to view the content
-within OnTrack.
+Expected outcome: Adding content is successful and now students will be able to
+view the content within OnTrack.
### Test Case 2: Edit content within OnTrack
@@ -77,15 +81,18 @@ Steps:
1. Click on the content and start editing.
1. Click save once done.
-Expected Outcome: New version will be uploaded in OnTrack which can be viewed by the student.
+Expected Outcome: The new version will be uploaded to OnTrack, where it can be
+viewed by students.
### Test case 3: Unauthorized Access
-Description: Making sure that only unit chair can host the content within OnTrack.
+Description: Making sure that only the unit chair can host content within
+OnTrack.
Steps:
-1. If anyone attempts to access the host content functionality without any authentication.
+1. Anyone attempts to access the host-content functionality without
+ authentication.
Expected Outcome: An error message will be displayed on the screen.
@@ -99,8 +106,9 @@ Steps:
1. Navigate to dashboard and click on content.
1. Start viewing certain content.
-Expected Outcome: It will allow students to have access to the content which was added by the unit
-chair. Additionally have the task sheet and the content in one place.
+Expected Outcome: Students will have access to the content added by the unit
+chair, and they will have the task sheet and the content in one place.
## Testing
@@ -108,15 +116,18 @@ chair. Additionally have the task sheet and the content in one place.
- Open a terminal and navigate to the correct directory.
- Run the tests.
- If all the tests are successful, a success message will be displayed.
-- If the tests are failed an error message will display, along with a description of why it failed.
-- When troubleshooting, if the tests fail, review the error message, and identify what the issue
- might be.
+- If the tests fail, an error message will be displayed, along with a
+ description of why they failed.
+- When troubleshooting failed tests, review the error message and identify
+ what the issue might be.
- Rerun the tests to see whether the issue is resolved.
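The pass/fail reporting flow described above can be sketched as a small helper; this is purely illustrative, and the real test runner's output format is an assumption:

```typescript
// Minimal shape of one test outcome, for illustration only.
interface TestResult {
  name: string;
  passed: boolean;
  error?: string;
}

// Summarise a test run: a single success message when everything passed,
// otherwise one line per failure with its description.
function summariseRun(results: TestResult[]): string {
  const failures = results.filter((r) => !r.passed);
  if (failures.length === 0) {
    return 'All tests passed.';
  }
  return failures
    .map((r) => `${r.name} failed: ${r.error ?? 'no description'}`)
    .join('\n');
}
```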
## Conclusion
-To conclude, this new feature ‘Incorporate content in Ontrack’ will allow unit chairs to host
-content within OnTrack. Which can be accessible by the students who are enrolled in the unit.
-Therefore, this document outlines functional, non-functional requirements along with test cases
-which can be used for testing the system amongst various scenarios. This document also outlines what
-steps needs to be taken when performing the tests and how to handle issues when running the tests.
+To conclude, this new feature, ‘Incorporate content in OnTrack’, will allow
+unit chairs to host content within OnTrack, which can be accessed by the
+students who are enrolled in the unit. This document outlines functional and
+non-functional requirements, along with test cases which can be used for
+testing the system across various scenarios. It also outlines what steps need
+to be taken when performing the tests and how to handle issues when running
+them.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/incorporate-doc.md b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/incorporate-doc.md
index 8bbb2380d..6738a6499 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/incorporate-doc.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/incorporate-doc.md
@@ -4,11 +4,13 @@ title: Incorporate content in OnTrack
Author: Devanshi Patel
-This documentation outlines the functionality and the way the new feature is going to be implemented
-on OnTrack that will enable unit chairs to host content on OnTrack, which can be accessible from the
-students end. This feature aims to provide better flexibility by adding the content in one place
-where it makes it easier for students to access both the content and the tasksheet for the unit in
-one place. This documentation will outline what needs to be done to achieve this feature.
+This documentation outlines the functionality of the new feature and the way
+it will be implemented in OnTrack. The feature will enable unit chairs to host
+content on OnTrack, which can be accessed from the student's end. It aims to
+provide better flexibility by adding the content in one place, making it
+easier for students to access both the content and the task sheet for the
+unit in one place. This documentation will outline what needs to be done to
+achieve this feature.
## Requirements
@@ -20,23 +22,24 @@ one place. This documentation will outline what needs to be done to achieve this
- Design the overview of the front-end interface.
- Make adjustment to API, to ensure that the response is accurate.
-- Create a test case for front end that upheld the user experience of the feature.
+- Create a test case for the front end that upholds the user experience of
+ the feature.
## UML Diagram
-- Create a UML diagram that will show the flow of incorporating content on Ontrack and the flow
- during the process.
-- How it will interact with front end, backend as well as with other features that are already in
- place.
+- Create a UML diagram that will show the flow of incorporating content in
+ OnTrack and the flow during the process.
+- Show how it will interact with the front end and backend, as well as with
+ other features that are already in place.
## Coding
-- Code the new feature, that will allow unit chairs to host content and make it accessible for
- students to view.
+- Code the new feature that will allow unit chairs to host content and make
+ it accessible for students to view.
- Code the part that allows students to access content from their end.
## Testing
-- Create a test case for the backend, to ensure the data are correctly handled and ensure that the
- request is handle in correct manner.
+- Create a test case for the backend to ensure the data is correctly handled
+ and that requests are handled in the correct manner.
- Create documentation that outlines the steps for conducting the tests.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/uml-diagram.md b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/uml-diagram.md
index bd87459d0..8c34544bd 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/uml-diagram.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Incorporate Content Ontrack/uml-diagram.md
@@ -8,7 +8,8 @@ Company: Thoth Tech
## Introduction
-This Document outlines the flow of the new feature 'incorporate Content' into Ontrack.
+This document outlines the flow of the new feature, ‘Incorporate Content’, in
+OnTrack.
## Use Case Diagram
diff --git a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-containers-srs.md b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-containers-srs.md
index 3ce36f26e..59ca50425 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-containers-srs.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-containers-srs.md
@@ -1,69 +1,77 @@
---
title:
- Jupyter Notebook/Word Document Docker Containers Software Requirement Specification (SRS) Document
+ Jupyter Notebook/Word Document Docker Containers Software Requirement
+ Specification (SRS) Document
---
## 1. Introduction
### 1.1 Purpose
-Currently, when an end user wishes to upload a Jupyter Notebook file to OnTrack, they must first
-manually convert the file to a PDF. The purpose of the Jupyter Notebook conversion feature is to
-automatically perform the conversion of Jupyter Notebook files to PDF during the submission process.
-During research for this feature it was determined that converting Word documents to PDF was an
-extensible feature of the Jupyter Notebook conversion feature. Depending on the type of file that an
-end user submits - if it is a Word document or a Jupyter Notebook file - one container will provide
-the conversion function for the Jupyter Notebook, and the other container will provide the
-conversion function for the Word document. Both containers will output a PDF file.
+Currently, when an end user wishes to upload a Jupyter Notebook file to OnTrack,
+they must first manually convert the file to a PDF. The purpose of the Jupyter
+Notebook conversion feature is to automatically perform the conversion of
+Jupyter Notebook files to PDF during the submission process. During research for
+this feature it was determined that converting Word documents to PDF was an
+extensible feature of the Jupyter Notebook conversion feature. Depending on the
+type of file that an end user submits - if it is a Word document or a Jupyter
+Notebook file - one container will provide the conversion function for the
+Jupyter Notebook, and the other container will provide the conversion function
+for the Word document. Both containers will output a PDF file.
### 1.2 Intended Audience
-The intended audience of this feature is all users of the OnTrack system (students and teachers).
-This feature will allow all users to submit Jupyter Notebook and Word Document files to OnTrack
-directly, instead of having to first manually convert the file to PDF and then submit that to
-OnTrack. The feature will also allow users to view their converted file for review or marking.
+The intended audience of this feature is all users of the OnTrack system
+(students and teachers). This feature will allow all users to submit Jupyter
+Notebook and Word Document files to OnTrack directly, instead of having to first
+manually convert the file to PDF and then submit that to OnTrack. The feature
+will also allow users to view their converted file for review or marking.
### 1.3 Intended Use
-The intended use of this feature is to provide the functionality for the mentioned conversions. The
-user will submit either a Jupyter Notebook or a Word document file to OnTrack, and each container
-will be used, depending on the file type, to make the necessary conversion of the submitted file to
-PDF format.
+The intended use of this feature is to provide the functionality for the
+mentioned conversions. The user will submit either a Jupyter Notebook or a Word
+document file to OnTrack, and each container will be used, depending on the file
+type, to make the necessary conversion of the submitted file to PDF format.
### 1.4 Scope
-This feature will be developed in steps: firstly we aim to develop standalone containers which
-provide the feature, secondly they will be integrated into OnTrack thus completing the feature. The
-scope during this trimester will be to create standalone Docker containers which can provide the
-function of converting Jupyter Notebook or Word Document files to PDF format. Also within scope is
+This feature will be developed in steps: firstly we aim to develop standalone
+containers which provide the feature, secondly they will be integrated into
+OnTrack, thus completing the feature. The scope during this trimester will be to
+create standalone Docker containers which can provide the function of converting
+Jupyter Notebook or Word Document files to PDF format. Also within scope is
testing the containers and ensuring they conform to a testing strategy.
### 1.5 Definitions and Acronyms
-- OnTrack – an online learning management system that provides task work to users and allows them to
- submit work for feedback and assessment purposes.
-- DOCX/DOC - a DOCX/DOC file is a document created by Microsoft Word, a word processor. DOCX/DOC
- files typically contain text.
-- IPYNB – an IPYNB (IPython notebook) file is a document created by Jupyter Notebook, an interactive
- computational environment. IPYNB files can contain code input and output, formatted text,
- mathematical functions, and images.
-- PDF – a PDF (portable document format) file is a multi-platform document commonly used for saving
- documents to be viewed on multiple platforms.
-- Docker Container Image – a Docker Container Image is a lightweight, stand-alone, executable
- package of software that includes everything needed to run an application: code, runtime, system
- tools, system libraries and settings.
+- OnTrack – an online learning management system that provides task work to
+ users and allows them to submit work for feedback and assessment purposes.
+- DOCX/DOC – a DOCX/DOC file is a document created by Microsoft Word, a word
+ processor. DOCX/DOC files typically contain text.
+- IPYNB – an IPYNB (IPython notebook) file is a document created by Jupyter
+ Notebook, an interactive computational environment. IPYNB files can contain
+ code input and output, formatted text, mathematical functions, and images.
+- PDF – a PDF (portable document format) file is a multi-platform document
+ commonly used for saving documents to be viewed on multiple platforms.
+- Docker Container Image – a Docker Container Image is a lightweight,
+ stand-alone, executable package of software that includes everything needed to
+ run an application: code, runtime, system tools, system libraries and
+ settings.
## 2. Overall Description
### 2.1 User Needs
-Users will need to be able to directly submit Jupyter Notebook and Word document files to OnTrack.
-Both of these functionalities will be provided by the proposed containers.
+Users will need to be able to directly submit Jupyter Notebook and Word document
+files to OnTrack. Both of these functionalities will be provided by the proposed
+containers.
### 2.2 Assumptions and Dependencies
- It is assumed that:
- - The user submits a valid Jupyter Notebook/Word Document file to the OnTrack system.
+ - The user submits a valid Jupyter Notebook/Word Document file to the OnTrack
+ system.
- The user has a valid internet connection
- The user is aware of how to submit files to OnTrack
- The following dependencies are relied upon:
@@ -95,8 +103,10 @@ Both of these functionalities will be provided by the proposed containers.
### 3.4 Nonfunctional Requirements
-- Reliability – The ability of the system to consistently perform its required functions under
- stated conditions.
-- Scalability – ability of software to be scaled to encompass project scope in its entirety
-- Maintainability – ability of software to be maintained, ensuring consistent and upmost performance
+- Reliability – The ability of the system to consistently perform its required
+ functions under stated conditions.
+- Scalability – the ability of software to be scaled to encompass the project
+  scope in its entirety.
+- Maintainability – the ability of software to be maintained, ensuring
+  consistent and utmost performance.
- Usability – User standards
diff --git a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-documentation-research-t1-2022.md b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-documentation-research-t1-2022.md
index 38d238ee5..7ca0195d3 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-documentation-research-t1-2022.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/docker-documentation-research-t1-2022.md
@@ -6,21 +6,24 @@ title: Docker Documentation and Research (WIP) T1-2022
## Intro
-The Jupyter Notebook conversion feature will occur via processes inside Docker containers. In this
-document we will discuss the main architecture of this structure in relation to OnTrack.
+The Jupyter Notebook conversion feature will occur via processes inside Docker
+containers. In this document we will discuss the main architecture of this
+structure in relation to OnTrack.
OnTrack is deployed through two main containers, they are:
-- The front-end container which hosts the logic behind the user interface of OnTrack (mostly
- unrelated to this feature)
-- The back-end container which hosts the logic related to database interactions and other processes
+- The front-end container which hosts the logic behind the user interface of
+ OnTrack (mostly unrelated to this feature)
+- The back-end container which hosts the logic related to database interactions
+ and other processes
-A new container will be created to achieve the Jupyter Notebook conversion feature. This contains
-all of the dependencies (Python, TeX, etc.) needed for a .ipynb(Jupyter Notebook File) to PDF
-conversion, and performing the conversion process within this container. This allows us to have a
-standalone software package that is extendable if required, for example if new Python libraries are
-required by a unit. It also means that we are able to create as many standalone containers as
-required for different conversion processes, such as:
+A new container will be created to achieve the Jupyter Notebook conversion
+feature. This container holds all of the dependencies (Python, TeX, etc.)
+needed for a .ipynb (Jupyter Notebook file) to PDF conversion, and the
+conversion process is performed within this container. This allows us to have
+a standalone software package that is extendable if required, for example if
+new Python libraries are required by a unit. It also means that we are able to
+create as many standalone containers as required for different conversion
+processes, such as:
- Docx to PDF conversion using Apache POI
- Powerpoint presentation to PDF conversion using Apache POI
@@ -29,37 +32,43 @@ required for different conversion processes, such as:

-When the OnTrack front-end sends a new file to the OnTrack back-end, the back-end will be able to
-determine the file type, and if the file needs to be converted. If the OnTrack back-end recieves a
-.ipynb file, it will run the Jupyter to PDF conversion container to perform the conversion process.
+When the OnTrack front-end sends a new file to the OnTrack back-end, the
+back-end will be able to determine the file type, and if the file needs to be
+converted. If the OnTrack back-end receives a .ipynb file, it will run the
+Jupyter to PDF conversion container to perform the conversion process.
This is done via a shell command that does several things:
-1. Firstly, it ensures that the file to be converted is renamed to "input.ipynb".
-2. It then instructs the container to run AND it mounts the file's directory as a Docker volume. As
- the container runs it will execute the conversion process via its `ENTRYPOINT` command. The
- converted file will be output to the volume. After the container has run, it will be removed.
+1. Firstly, it ensures that the file to be converted is renamed to
+ "input.ipynb".
+2. It then instructs the container to run AND it mounts the file's directory as
+ a Docker volume. As the container runs it will execute the conversion process
+ via its `ENTRYPOINT` command. The converted file will be output to the
+ volume. After the container has run, it will be removed.
3. Finally, it removes the temporary files to ensure there is always free space.
-The `docker run` command provides several options for this type of use-case: we are able to specify
-a volume using the `-v` option, and we are able to ask that the container is removed after it has
-finished its process with the `--rm` option.
+The `docker run` command provides several options for this type of use-case: we
+are able to specify a volume using the `-v` option, and we are able to ask that
+the container is removed after it has finished its process with the `--rm`
+option.
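The shell command described above can be sketched in Python. This is a minimal
illustration only: the image name `ontrack/jupyter-to-pdf`, the `/data` mount
point, and the function names are hypothetical, as the document does not
specify them. The sketch only builds the command; it does not invoke Docker.

```python
import os
import shlex

# Hypothetical image name; the real container name is not given in this doc.
IMAGE = "ontrack/jupyter-to-pdf"


def build_conversion_plan(submission_path):
    """Build (but do not run) the `docker run` command described above.

    Returns the rename target for step 1 and the command for step 2.
    """
    work_dir = os.path.dirname(os.path.abspath(submission_path))
    # Step 1: the file to be converted is renamed to "input.ipynb".
    target = os.path.join(work_dir, "input.ipynb")
    # Step 2: run the container with the file's directory mounted as a volume
    # (-v) and remove the container after it finishes (--rm). The conversion
    # itself runs as the image's ENTRYPOINT, so no extra arguments are needed;
    # the converted "output.pdf" appears in the same mounted directory.
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{work_dir}:/data",
        IMAGE,
    ]
    return target, cmd


target, cmd = build_conversion_plan("/tmp/submissions/task1.ipynb")
print(shlex.join(cmd))
# → docker run --rm -v /tmp/submissions:/data ontrack/jupyter-to-pdf
```

Step 3 (removing temporary files) would follow after the container exits, e.g.
by deleting `input.ipynb` once `output.pdf` has been collected.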
-**Note: we may need to have some process in place to read whether the conversion was a success or a
-failure.**
+**Note: we may need to have some process in place to detect whether the
+conversion was a success or a failure.**
## Conversion Container Requirements
-For a container to function in this architecture, there are some requirements that have to be met:
-
-- The container will run the conversion as its `ENTRYPOINT` command _without any user input
- required_.
-- The container will always look in a single directory (mounted as a volume when the container is
- run) for **one** file that will be called "input" (+ whatever file extension is required).
-- The container will always output the converted file to the **same** directory (mounted as a volumn
- when the container is run) and call it "output.pdf".
-
-These requirements allow the containers to remain isolated while the file conversion logic is
-handled seperately by the OnTrack backend. This is necessary to follow the Single-responsibility
-Principle: the container itself is responsible for only one task - that is, performing the
-conversion process.
+For a container to function in this architecture, there are some requirements
+that have to be met:
+
+- The container will run the conversion as its `ENTRYPOINT` command _without any
+ user input required_.
+- The container will always look in a single directory (mounted as a volume when
+ the container is run) for **one** file that will be called "input" (+ whatever
+ file extension is required).
+- The container will always output the converted file to the **same** directory
+  (mounted as a volume when the container is run) and call it "output.pdf".
+
+These requirements allow the containers to remain isolated while the file
+conversion logic is handled separately by the OnTrack backend. This is
+necessary to follow the Single-responsibility Principle: the container itself
+is responsible for only one task - that is, performing the conversion process.
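Because each container has a single responsibility, the backend's only
conversion-specific decision is which image to run for a given file type. A
minimal sketch of that dispatch, with illustrative (not actual) image names:

```python
import os
from typing import Optional

# Hypothetical image names used for illustration only; the document names
# the conversions (ipynb, docx) but not the images themselves.
CONVERTERS = {
    ".ipynb": "ontrack/jupyter-to-pdf",
    ".docx": "ontrack/word-to-pdf",
    ".doc": "ontrack/word-to-pdf",
}


def converter_image_for(filename: str) -> Optional[str]:
    """Pick the conversion container for a submitted file, or None when the
    file needs no conversion (e.g. it is already a PDF)."""
    _, ext = os.path.splitext(filename.lower())
    return CONVERTERS.get(ext)
```

Adding a new conversion (e.g. PowerPoint to PDF) then only requires a new
container image and one more dictionary entry, leaving existing containers
untouched.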
diff --git a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/jupyter-notebook-epic-t1-2022.md b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/jupyter-notebook-epic-t1-2022.md
index 747389cca..44ca1aed7 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/jupyter-notebook-epic-t1-2022.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/jupyter-notebook-epic-t1-2022.md
@@ -4,19 +4,20 @@ title: Jupyter Notebook Support Epic
### Background
-- Students who use OnTrack currently are limited in the way they can upload their Jupyter notebook
- files to OnTrack. Students must download their notebook as a HTML through Jupyter notebook and use
- an online PDF converter to convert the HTML file to PDF, so their work can be submitted to
- OnTrack.
+- Students who use OnTrack are currently limited in the way they can upload
+  their Jupyter Notebook files to OnTrack. Students must download their
+  notebook as an HTML file through Jupyter Notebook and use an online PDF
+  converter to convert the HTML file to PDF so their work can be submitted to
+  OnTrack.
### Business Value
-- To minimise students needing to use outside sources to be able to upload their work, students must
- be able to download their work from a Jupyter notebook, which is saved as a file with ‘.ipynb’
- file, and upload to OnTrack, where it will be converted to a PDF for submission. This will make it
- much more efficient for students to upload their work, which will save time for tutors having to
- explain the process of having to outsource to a PDF conversion website. This will increase trust
- in our product and improve user experience.
+- To minimise students needing to use outside sources to be able to upload
+  their work, students must be able to download their work from a Jupyter
+  notebook, which is saved as a file with the ‘.ipynb’ extension, and upload it
+  to OnTrack, where it will be converted to a PDF for submission. This will
+  make it much more efficient for students to upload their work, and will save
+  time for tutors having to explain the process of outsourcing to a PDF
+  conversion website. This will increase trust in our product and improve user
+  experience.
### In scope
@@ -31,7 +32,8 @@ title: Jupyter Notebook Support Epic
### What needs to happen
-- Review of existing work to find what is usable and then plan out what work is still to be done.
+- Review of existing work to find what is usable and then plan out what work is
+ still to be done.
- Allow for ipynb files to be accepted for Ontrack
- Convert ipynb files to PDF via LaTeX
- Test files are converted from ipynb to PDF
@@ -51,16 +53,17 @@ title: Jupyter Notebook Support Epic
### Operations/Support
-- Team members may need training/upskilling in technologies such as Ruby, Ruby on Rails, Docker and
- GitHub.
-- User guide documentation will need to be release simultaneously with the new feature, so users
- know how to use this new feature.
+- Team members may need training/upskilling in technologies such as Ruby, Ruby
+ on Rails, Docker and GitHub.
+- User guide documentation will need to be released simultaneously with the
+  new feature, so users know how to use this new feature.
### What are the challenges?
-- Team members will need to understand existing code contributions and understand what is usable to
- integrate their Jupyter notebook support project work. Team members may also need to learn new
- technologies used in the integration of Jupyter notebook support
+- Team members will need to understand existing code contributions and what is
+  usable to integrate their Jupyter notebook support project work. Team members
+  may also need to learn new technologies used in the integration of Jupyter
+  notebook support.
### Acceptance criteria
diff --git a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/prototype-srs-software-requirements-specification.md b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/prototype-srs-software-requirements-specification.md
index 29112c1ab..789e91e78 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/prototype-srs-software-requirements-specification.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Jupyter Notebook/prototype-srs-software-requirements-specification.md
@@ -1,5 +1,6 @@
---
-title: Jupyter Notebook Week 6 Prototype - Software Requirement Specification (SRS)
+title:
+ Jupyter Notebook Week 6 Prototype - Software Requirement Specification (SRS)
---
- [Back to Jupyter Notebook Documentation Index](/products/ontrack/documentation/jupyter-notebook)
@@ -8,10 +9,11 @@ title: Jupyter Notebook Week 6 Prototype - Software Requirement Specification (S
### 1.1 Purpose
-- The purpose of this prototype is to ensure the code written thus far by team members sufficiently
- completes the two key functionalities of converting both Jupyter Notebook files and Word document
- files into PDF format. The successful implementation will ensure the team is on the right track in
- terms of both their ideas and their coding work.
+- The purpose of this prototype is to ensure the code written thus far by team
+ members sufficiently completes the two key functionalities of converting both
+ Jupyter Notebook files and Word document files into PDF format. The successful
+ implementation will ensure the team is on the right track in terms of both
+ their ideas and their coding work.
### 1.2 Intended Audience
@@ -20,52 +22,57 @@ title: Jupyter Notebook Week 6 Prototype - Software Requirement Specification (S
### 1.3 Intended Use
-- The intended use of this prototype is to be a more-basic first draft of the logic the team wishes
- to implement in the final product of the project scope. It will test the conversion to PDF for
- both Jupyter Notebook and Word document files. The successful implementation of this prototype
- will give the team the knowledge that the code they’ve written works on a smaller scale than
- OnTrack.
+- The intended use of this prototype is to be a more basic first draft of the
+ logic the team wishes to implement in the final product of the project scope.
+ It will test the conversion to PDF for both Jupyter Notebook and Word document
+ files. The successful implementation of this prototype will give the team the
+ knowledge that the code they’ve written works on a smaller scale than OnTrack.
### 1.4 Scope
-- The scope of the prototype is to create a simple front-end interface similar to that of OnTrack
- and, through the use of command-line commands, allow for the following functionalities:
+- The scope of the prototype is to create a simple front-end interface similar
+ to that of OnTrack and, through the use of command-line commands, allow for
+ the following functionalities:
- Converting a Jupyter Notebook file into PDF format
- Converting a Word file into PDF format
### 1.5 Definitions and Acronyms
-- Jupyter Notebook – a web application that allows users write and run live code, equations,
- visualizations, and plain text in various languages.
+- Jupyter Notebook – a web application that allows users to write and run live
+  code, equations, visualizations, and plain text in various languages.
- .ipynb – Jupyter Notebook file type
- .doc – Microsoft Word file
- .docx – Microsoft Word file
- PDF – Portable Document Format
-- Library – A library is a collection of pre-written code that provide further access to system
- functionality such as file I/O that would otherwise be inaccessible. This is done importing the
- library at the beginning of the program.
+- Library – A library is a collection of pre-written code that provides
+  further access to system functionality, such as file I/O, that would
+  otherwise be inaccessible. This is done by importing the library at the
+  beginning of the program.
- HTML – HyperText Markup Language
- RubyOnRails – a server-side web application framework written in Ruby.
-- Nbconvert – is a library of pre-written code used to convert Jupyter Notebook file to PDF.
-- Backend – Is development that happens behind the scenes, it is all the parts of a computer system
- or application that is not directly accessed by the user, it is responsible for storing and
- manipulating data through code.
-- Frontend – Is development on what the user can see and/or directly interact with (i.e., what can
- be seen on the computer screen, such as a window, or buttons and input fields/boxes)
+- Nbconvert – a library of pre-written code used to convert Jupyter Notebook
+  files to PDF.
+- Backend – development that happens behind the scenes; it comprises all the
+  parts of a computer system or application that are not directly accessed by
+  the user, and is responsible for storing and manipulating data through code.
+- Frontend – development of what the user can see and/or directly interact
+  with (i.e., what can be seen on the computer screen, such as a window, or
+  buttons and input fields/boxes).
## 2. Overall Description
### 2.1 User Needs
-- As a student, I want to be able to upload Jupyter Notebook (.ipynb) files without having to go
- through the extra step of converting them to a PDF first.
-- As a tutor, I want students to be able to upload any file they are working on so they can focus on
- the quality of the work.
+- As a student, I want to be able to upload Jupyter Notebook (.ipynb) files
+ without having to go through the extra step of converting them to a PDF first.
+- As a tutor, I want students to be able to upload any file they are working on
+ so they can focus on the quality of the work.
### 2.2 Assumptions and Dependencies
- Assumptions include:
- - The user has a working and valid Jupyter Notebook/Word Document ready to be converted to PDF.
+ - The user has a working and valid Jupyter Notebook/Word Document ready to be
+ converted to PDF.
- The user wants the input file to be converted as uploaded.
- The user has access to OnTrack.
- Key project member’s availability
@@ -74,8 +81,8 @@ title: Jupyter Notebook Week 6 Prototype - Software Requirement Specification (S
- Dependencies include:
- Input of a valid Jupyter Notebook/Word Document file.
- A valid internet connection, to interact with OnTrack environment
- - A Docker container must be created first before testing of Jupyter Notebook/Word Document
- conversion can be tested.
+  - A Docker container must be created before Jupyter Notebook/Word Document
+    conversion can be tested.
- Approval of project expansion must be given before work on expansion begins
## 3. System Features and Requirements
@@ -102,9 +109,11 @@ title: Jupyter Notebook Week 6 Prototype - Software Requirement Specification (S
### 3.4 Non-functional Requirements
- Usability – User standards
-- Scalability – ability of software to be scaled to encompass project scope in its entirety
-- Maintainability – ability of software to be maintained, ensuring consistent and upmost performance
-- Reliability – The ability of the system to consistently perform its required functions under
- stated conditions.
-- Documentation – User documentation, testing results, meeting minutes and notes, contribution
- notes, discussions.
+- Scalability – the ability of software to be scaled to encompass the project
+  scope in its entirety.
+- Maintainability – the ability of software to be maintained, ensuring
+  consistent and utmost performance.
+- Reliability – The ability of the system to consistently perform its required
+ functions under stated conditions.
+- Documentation – User documentation, testing results, meeting minutes and
+ notes, contribution notes, discussions.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/design-documentation.md b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/design-documentation.md
index 35787f5b1..c71ea500b 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/design-documentation.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/design-documentation.md
@@ -8,21 +8,23 @@ Company: Thoth Tech
## Introduction
-This design document outlines the approach for incorporating multiple organisations into the OnTrack
-server, enhancing its functionality to accommodate various organisations within a single system
-instance. The goal is to provide a comprehensive solution that enables effective organisation
-management, user assignment, and access control while maintaining the integrity and security of the
-OnTrack system.
+This design document outlines the approach for incorporating multiple
+organisations into the OnTrack server, enhancing its functionality to
+accommodate various organisations within a single system instance. The goal is
+to provide a comprehensive solution that enables effective organisation
+management, user assignment, and access control while maintaining the integrity
+and security of the OnTrack system.
## User Story
-As a Site administrator, I want to be able to manage multiple organisations within the OnTrack
-system. This will allow me to efficiently organise users and data, ensuring that each organisation
-operates independently.
+As a Site administrator, I want to be able to manage multiple organisations
+within the OnTrack system. This will allow me to efficiently organise users and
+data, ensuring that each organisation operates independently.
## Acceptance Criteria
-- Site administrators can create new organisations, providing details such as name and description.
+- Site administrators can create new organisations, providing details such as
+ name and description.
- Organisations can be edited to update their details.
- Organisations can be disabled when they are no longer in use.
- Users can be associated with specific organisations.
@@ -35,25 +37,30 @@ operates independently.
The frontend of the system incorporates the following components:
-- Organisation Management UI: This component allows administrators to create, edit, and disable
- organisations. It includes a user-friendly interface for managing organisation details.
-- User Management UI: Users are associated with organisations through this interface. It provides a
- seamless experience for assigning users to organisations and managing user profiles.
-- Organisation Switching UI: Users with access to multiple organisations can easily switch between
- them using this interface. It ensures a smooth transition from one organisation's context to
- another.
+- Organisation Management UI: This component allows administrators to create,
+ edit, and disable organisations. It includes a user-friendly interface for
+ managing organisation details.
+- User Management UI: Users are associated with organisations through this
+ interface. It provides a seamless experience for assigning users to
+ organisations and managing user profiles.
+- Organisation Switching UI: Users with access to multiple organisations can
+ easily switch between them using this interface. It ensures a smooth
+ transition from one organisation's context to another.
## Backend Architecture
The backend of the system handles data management and access control:
-- Organisation Management: Backend services manage the creation, editing, and disabling of
- organisations. Data is stored securely in the database, and appropriate permissions are enforced.
-- User Organisation Assignment: Backend processes allow site administrators to associate users with
- specific organisations. These associations are maintained in the database.
-- Access Control: The backend enforces access control rules to ensure that users can only access
- data within their associated organisation. This is achieved through role-based access control
- mechanisms where site administrator is a new role.
+- Organisation Management: Backend services manage the creation, editing, and
+ disabling of organisations. Data is stored securely in the database, and
+ appropriate permissions are enforced.
+- User Organisation Assignment: Backend processes allow site administrators to
+ associate users with specific organisations. These associations are maintained
+ in the database.
+- Access Control: The backend enforces access control rules to ensure that
+  users can only access data within their associated organisation. This is
+  achieved through role-based access control mechanisms, where the site
+  administrator is a new role.
## Technical Implementation-
@@ -67,9 +74,10 @@ Organisation Management UI:
User Management UI:
-- Develop interfaces for assigning users to organisations and managing user profiles.
-- Implement user-friendly features for easy association and disassociation of users with
- organisations.
+- Develop interfaces for assigning users to organisations and managing user
+ profiles.
+- Implement user-friendly features for easy association and disassociation of
+ users with organisations.
Organisation Switching UI:
@@ -86,25 +94,29 @@ Organisation Management:
User Organisation Assignment:
-- Create API endpoints for associating and disassociating users with organisations.
+- Create API endpoints for associating and disassociating users with
+ organisations.
- Implement validation checks to prevent unauthorized assignments.
- Update user profiles to reflect organisation associations.
Access Control:
-- Enforce access control rules based on user roles and organisation associations.
+- Enforce access control rules based on user roles and organisation
+ associations.
- Implement middleware to check permissions before granting access to data.
- Securely manage data queries to ensure isolation between organisations.
-- The technical implementation aims to provide a seamless user experience while ensuring data
- security and access control across multiple organisations.
+- The technical implementation aims to provide a seamless user experience while
+ ensuring data security and access control across multiple organisations.
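The isolation rule described above can be reduced to a single check: a user may only reach resources inside their own organisation. The sketch below is plain Ruby with illustrative names (`User`, `Resource`, `authorised?`); it is not OnTrack's actual middleware, just the core comparison such middleware would make.

```ruby
# Minimal sketch of organisation-scoped access control.
# Names are illustrative, not OnTrack's actual classes.
User     = Struct.new(:id, :organisation_id, :role)
Resource = Struct.new(:id, :organisation_id)

# A user is authorised only when user and resource share an organisation;
# the "Site Administrator" role is scoped to its own organisation too.
def authorised?(user, resource)
  user.organisation_id == resource.organisation_id
end

alice  = User.new(1, 10, :student)
report = Resource.new(99, 20)
puts authorised?(alice, report) # false: the report belongs to another organisation
```

In a real deployment this check would run in middleware before any data query, and queries themselves would be scoped by `organisation_id` so cross-organisation rows are never fetched.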
## Database Design
-The database design ensures that organisations can be efficiently managed, users can be associated
-with organisations, and access control can be enforced based on these associations. The database
-design for incorporating multiple organisations includes the following elements:
+The database design ensures that organisations can be efficiently managed, users
+can be associated with organisations, and access control can be enforced based
+on these associations. The database design for incorporating multiple
+organisations includes the following elements:
-- Organisations Table: Create a table named ‘organisations’ to store organisation-specific details.
+- Organisations Table: Create a table named ‘organisations’ to store
+ organisation-specific details.
Columns:
@@ -114,28 +126,33 @@ Columns:
- Email: the official email address of the organisation.
- is_enabled: Flag indicating whether the organisation is active or disabled.
-- Users Table: Update the existing ‘users’ table to include organisation_id as the foreign key.
+- Users Table: Update the existing ‘users’ table to include organisation_id as
+ the foreign key.
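A Rails migration matching this schema might look like the following. The table and column names come from the document; the migration class name and defaults are assumptions for illustration, not the actual OnTrack migration.

```ruby
# Hypothetical migration for the schema above (illustrative only).
class AddOrganisations < ActiveRecord::Migration[7.0]
  def change
    # create_table supplies the auto-incrementing primary key
    # that the document calls organisation_id.
    create_table :organisations do |t|
      t.string  :name,       null: false
      t.text    :description
      t.string  :email
      t.boolean :is_enabled, null: false, default: true
      t.timestamps
    end

    # Link each user to an organisation via a foreign key,
    # as described for the updated users table.
    add_reference :users, :organisation, foreign_key: true
  end
end
```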
## Error Handling and Validation
-Robust error handling and validation mechanisms are essential for ensuring data integrity and user
-satisfaction. Frontend and backend components should implement validation checks and provide clear
-error messages to users.
+Robust error handling and validation mechanisms are essential for ensuring data
+integrity and user satisfaction. Frontend and backend components should
+implement validation checks and provide clear error messages to users.
## Testing Strategy
-Testing is crucial to verify the functionality and security of the system. Both frontend and backend
-components should undergo thorough testing to identify and address issues.
+Testing is crucial to verify the functionality and security of the system. Both
+frontend and backend components should undergo thorough testing to identify and
+address issues.
## Deployment Plan
-The deployment plan outlines the steps for introducing the multiple organisations feature into the
-OnTrack system, ensuring a smooth transition for users.
+The deployment plan outlines the steps for introducing the multiple
+organisations feature into the OnTrack system, ensuring a smooth transition for
+users.
## Conclusion
-The incorporation of multiple organisations into the OnTrack server is a significant enhancement
-that enhances the system's flexibility and scalability. By following the design outlined in this
-document and implementing it effectively, OnTrack will provide a powerful solution for managing
-multiple organisations while maintaining data security and access control. This design document
-serves as a roadmap for achieving these goals and delivering a feature-rich, user-friendly system.
+The incorporation of multiple organisations into the OnTrack server is a
+significant enhancement that improves the system's flexibility and scalability.
+By following the design outlined in this document and implementing it
+effectively, OnTrack will provide a powerful solution for managing multiple
+organisations while maintaining data security and access control. This design
+document serves as a roadmap for achieving these goals and delivering a
+feature-rich, user-friendly system.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/gather-requirements-for-multiple-organisations.md b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/gather-requirements-for-multiple-organisations.md
index 8b2f62069..5855ff980 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/gather-requirements-for-multiple-organisations.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/gather-requirements-for-multiple-organisations.md
@@ -8,55 +8,61 @@ title: Incorporate Multiple Organisations on a Single OnTrack Server
## Introduction
-This documentation outlines the requirements for implementing a feature that enables the
-ncorporation of multiple organisations within a single OnTrack server. This feature aims to enhance
-the administrative capabilities of the OnTrack application by allowing server operators to manage
-and segregate multiple organisations efficiently.
+This documentation outlines the requirements for implementing a feature that
+enables the incorporation of multiple organisations within a single OnTrack
+server. This feature aims to enhance the administrative capabilities of the
+OnTrack application by allowing server operators to manage and segregate
+multiple organisations efficiently.
## User Story
-As an OnTrack server operator, I want to be able to host multiple organisations within my server.
+As an OnTrack server operator, I want to be able to host multiple organisations
+within my server.
## Functional Requirements
### Backend
-- Design a flexible organisational structure to accommodate multiple organisations.
-- Develop functionality to create, edit, and disable organisations, including providing a unique
- identifier for each organisation.
-- Implement a new user role named "Site Administrator" with permissions to manage organisations,
- including the ability to add, disable, and edit them.
-- Enhance the user profile system to associate users with specific organisations and allow users to
- switch between organisations.
+- Design a flexible organisational structure to accommodate multiple
+ organisations.
+- Develop functionality to create, edit, and disable organisations, including
+ providing a unique identifier for each organisation.
+- Implement a new user role named "Site Administrator" with permissions to
+ manage organisations, including the ability to add, disable, and edit them.
+- Enhance the user profile system to associate users with specific organisations
+ and allow users to switch between organisations.
### Frontend
-- Design intuitive user interfaces for organisation creation, modification, and disabling.
-- Create a dedicated dashboard for Site Administrators to manage organisations, including options to
- add, disable, and edit organisations.
-- Update user profile pages to display and allow modification of the associated organisation.
+- Design intuitive user interfaces for organisation creation, modification, and
+ disabling.
+- Create a dedicated dashboard for Site Administrators to manage organisations,
+ including options to add, disable, and edit organisations.
+- Update user profile pages to display and allow modification of the associated
+ organisation.
## Non-Functional Requirements
### Performance
-- Ensure that the system can handle a significant number of organisations and users without
- compromising performance.
-- Optimize database queries and access patterns to maintain responsive user experience even with
- increased organisational complexity.
+- Ensure that the system can handle a significant number of organisations and
+ users without compromising performance.
+- Optimize database queries and access patterns to maintain responsive user
+ experience even with increased organisational complexity.
### Reliability
-- Implement data isolation mechanisms to prevent cross-organisation data leaks or unauthorized
- access.
-- Apply robust error handling to prevent disruptions due to organisational changes.
+- Implement data isolation mechanisms to prevent cross-organisation data leaks
+ or unauthorized access.
+- Apply robust error handling to prevent disruptions due to organisational
+ changes.
## Test Cases
### Test Case 1: Organisation Creation
-Description: Verify the system allows the creation of a new organisation with a unique name and
-identifier.
+Description: Verify the system allows the creation of a new organisation with a
+unique name and identifier.
### Steps
@@ -105,7 +111,8 @@ The user is now associated with the selected organisation.
### Test Case 4: Access Control
-Description: Verify users can access only the resources within their organisation.
+Description: Verify users can access only the resources within their
+organisation.
### Steps
@@ -114,19 +121,23 @@ Description: Verify users can access only the resources within their organisatio
### Expected Outcome
-Access is denied, and the user can only access resources within their own organisation.
+Access is denied, and the user can only access resources within their own
+organisation.
### Testing
-- Perform unit testing on each component, ensuring that organisation-related functionalities work as
- expected.
-- Conduct integration testing to ensure smooth interaction between different parts of the system.
-- Implement user acceptance testing involving Site Administrators and regular users to validate the
- feature's usability and correctness.
+- Perform unit testing on each component, ensuring that organisation-related
+ functionalities work as expected.
+- Conduct integration testing to ensure smooth interaction between different
+ parts of the system.
+- Implement user acceptance testing involving Site Administrators and regular
+ users to validate the feature's usability and correctness.
## Conclusion
-In conclusion, the incorporation of multiple organisations within a single OnTrack server brings a
-significant enhancement to the application's administrative capabilities. By following the outlined
-requirements and test cases, this feature will enable server operators to effectively manage and
-segregate various organisations, ensuring a more streamlined and organised user experience.
+In conclusion, the incorporation of multiple organisations within a single
+OnTrack server brings a significant enhancement to the application's
+administrative capabilities. By following the outlined requirements and test
+cases, this feature will enable server operators to effectively manage and
+segregate various organisations, ensuring a more streamlined and organised user
+experience.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/test-scenario-requirements.md b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/test-scenario-requirements.md
index 8854e8537..50fb2b916 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/test-scenario-requirements.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/test-scenario-requirements.md
@@ -8,8 +8,9 @@ title: Test Cases for Incorporating Multiple Organisations on OnTrack Server
## Introduction
-This document outlines test cases for incorporating multiple organisations on the OnTrack server,
-enhancing its functionality to accommodate various organizations within a single system instance.
+This document outlines test cases for incorporating multiple organisations on
+the OnTrack server, enhancing its functionality to accommodate various
+organisations within a single system instance.
## Backend Functionality
@@ -29,8 +30,8 @@ Steps:
4\. Submit the form.
-Expected Outcome: A new organisation is created, and it appears in the list of organisations managed
-by the Site Administrator.
+Expected Outcome: A new organisation is created, and it appears in the list of
+organisations managed by the Site Administrator.
Test Case 2: Editing an Organisation
@@ -48,8 +49,8 @@ Steps:
5\. Save the changes.
-Expected Outcome: The organisation's details are updated, and the changes are reflected in the
-system.
+Expected Outcome: The organisation's details are updated, and the changes are
+reflected in the system.
Test Case 3: Disabling an Organisation
@@ -65,14 +66,15 @@ Steps:
4\. Disable the organisation.
-Expected Outcome: The organisation is disabled and no longer accessible to users. It is removed from
-active use but remains in the system for reference.
+Expected Outcome: The organisation is disabled and no longer accessible to
+users. It is removed from active use but remains in the system for reference.
## User Organisation Assignment
Test Case 4: Associating a User with an Organisation
-Description: Test the capability to associate a user with a specific organisation.
+Description: Test the capability to associate a user with a specific
+organisation.
Steps:
@@ -84,13 +86,13 @@ Steps:
4\. Assign the user to an organisation.
-Expected Outcome: The user is associated with the chosen organisation, and their profile reflects
-the change.
+Expected Outcome: The user is associated with the chosen organisation, and their
+profile reflects the change.
Test Case 5: User Switching Between Organisations
-Description: Confirm that users can successfully switch between organisations when they have access
-to multiple organizations.
+Description: Confirm that users can successfully switch between organisations
+when they have access to multiple organisations.
Steps:
@@ -100,8 +102,8 @@ Steps:
3\. Select a different organisation to switch to.
-Expected Outcome: The user's context changes to the selected organisation, and they can access its
-resources and functionalities.
+Expected Outcome: The user's context changes to the selected organisation, and
+they can access its resources and functionalities.
## Access Control
@@ -109,8 +111,8 @@ resources and functionalities.
Test Case 6: User Data Access Control
-Description: Ensure that users can access data only within their associated organisation and are
-restricted from accessing data from other organisations.
+Description: Ensure that users can access data only within their associated
+organisation and are restricted from accessing data from other organisations.
Steps:
@@ -118,12 +120,14 @@ Steps:
2\. Attempt to access resources belonging to Organisation B.
-Expected Outcome: Access to resources of Organisation B is denied for the user from Organisation A.
+Expected Outcome: Access to resources of Organisation B is denied for the user
+from Organisation A.
Test Case 7: Site Administrator Data Access Control
-Description: Verify that Site Administrators can access data only from their organisation while
-being restricted from accessing data outside their organisation.
+Description: Verify that Site Administrators can access data only from their
+organisation while being restricted from accessing data outside their
+organisation.
Steps:
@@ -131,8 +135,8 @@ Steps:
2\. Attempt to access resources belonging to Organisation B.
-Expected Outcome: Access to resources of Organisation B is denied for the Site Administrator from
-Organisation A.
+Expected Outcome: Access to resources of Organisation B is denied for the Site
+Administrator from Organisation A.
## Frontend Functionality
@@ -152,12 +156,13 @@ Steps:
4\. Submit the form.
-Expected Outcome: A new organisation is created, and it is displayed in the list of organisations
-managed by the Site Administrator.
+Expected Outcome: A new organisation is created, and it is displayed in the list
+of organisations managed by the Site Administrator.
Test Case 9: Frontend - Editing an Organisation
-Description: Test the frontend capability to edit an existing organisation's details
+Description: Test the frontend capability to edit an existing organisation's
+details
Steps:
@@ -171,8 +176,8 @@ Steps:
5\. Save the changes.
-Expected Outcome: The organisation's details are updated in the frontend, and the changes are
-reflected in the system.
+Expected Outcome: The organisation's details are updated in the frontend, and
+the changes are reflected in the system.
Test Case 10: Frontend - Disabling an Organisation
@@ -188,14 +193,15 @@ Steps:
4\. Disable the organisation using the frontend interface.
-Expected Outcome: The organisation is visually disabled and no longer accessible to users via the
-frontend. It remains in the system for reference.
+Expected Outcome: The organisation is visually disabled and no longer accessible
+to users via the frontend. It remains in the system for reference.
## User Organisation Assignment
Test Case 11: Frontend - Associating a User with an Organisation
-Description: Test the frontend functionality to associate a user with a specific organisation.
+Description: Test the frontend functionality to associate a user with a specific
+organisation.
Steps:
@@ -207,13 +213,13 @@ Steps:
4\. Assign the user to an organisation using the frontend interface.
-Expected Outcome: The user's association with the chosen organisation is visually represented in the
-frontend, and their profile reflects the change.
+Expected Outcome: The user's association with the chosen organisation is
+visually represented in the frontend, and their profile reflects the change.
Test Case 12: Frontend - User Switching Between Organisations
-Description: Confirm that users can successfully switch between organisations via the frontend when
-they have access to multiple organisations.
+Description: Confirm that users can successfully switch between organisations
+via the frontend when they have access to multiple organisations.
Steps:
@@ -223,10 +229,12 @@ Steps:
3\. Select a different organisation to switch to.
-Expected Outcome: The user's context visually changes to the selected organisation in the frontend,
-and they can access its resources and functionalities.
+Expected Outcome: The user's context visually changes to the selected
+organisation in the frontend, and they can access its resources and
+functionalities.
## Conclusion
-These test cases cover both backend and frontend functionalities comprehensively to ensure that the
-multi-organisation feature functions correctly and provides a seamless experience for the users.
+These test cases cover both backend and frontend functionalities comprehensively
+to ensure that the multi-organisation feature functions correctly and provides a
+seamless experience for the users.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/uml-design.md b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/uml-design.md
index 5338b5267..1660daae6 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/uml-design.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Multiple Organisations/uml-design.md
@@ -1,5 +1,7 @@
---
-title: Requirements to incorporate multiple organisations on a single OnTrack server – UML Design
+title:
+ Requirements to incorporate multiple organisations on a single OnTrack server
+ – UML Design
---
Author: Sanah Quazi
Company: Thoth Tech
![a](uml.png)
-The UML diagram presented above focuses on achieving efficient organisation management, user
-association with organisations, and robust access control. It outlines the database structure for
-accommodating multiple organisations, which consists of two main elements:
+The UML diagram presented above focuses on achieving efficient organisation
+management, user association with organisations, and robust access control. It
+outlines the database structure for accommodating multiple organisations, which
+consists of two main elements:
-Organisations Table: This table, labelled 'organisations,' is responsible for storing
-organisation-specific information. It includes the following key columns:
+Organisations Table: This table, labelled 'organisations,' is responsible for
+storing organisation-specific information. It includes the following key
+columns:
-- organisation_id: This column serves as a unique identifier for each organisation and acts as the
- primary key for this table.
+- organisation_id: This column serves as a unique identifier for each
+ organisation and acts as the primary key for this table.
- name: The 'name' column holds the organisation's name.
-- description: In the 'description' column, you can find detailed descriptions of each organisation.
-- email: This column stores the official email address associated with the organisation.
-- is_enabled: The 'is_enabled' column is a flag that indicates whether the organisation is currently
- active or disabled.
-
-Users Table: In addition to the 'organisations' table, the diagram illustrates an update to the
-existing 'users' table. This update includes the addition of an organisation_id column, which serves
-as a foreign key. This column establishes a link between users and their associated organisations,
-allowing for efficient organisation assignment and access control enforcement.
-
-This database design is crucial for the successful implementation of the feature that enables
-multiple organisations within the OnTrack system. It ensures data integrity and provides the
-necessary structure for managing and securing organisational data.
+- description: In the 'description' column, you can find detailed descriptions
+ of each organisation.
+- email: This column stores the official email address associated with the
+ organisation.
+- is_enabled: The 'is_enabled' column is a flag that indicates whether the
+ organisation is currently active or disabled.
+
+Users Table: In addition to the 'organisations' table, the diagram illustrates
+an update to the existing 'users' table. This update includes the addition of an
+organisation_id column, which serves as a foreign key. This column establishes a
+link between users and their associated organisations, allowing for efficient
+organisation assignment and access control enforcement.
+
+This database design is crucial for the successful implementation of the feature
+that enables multiple organisations within the OnTrack system. It ensures data
+integrity and provides the necessary structure for managing and securing
+organisational data.
## The relationships between these classes are as follows
-- organisations to users: This association indicates a multiplicity of one-to-many (1..\*). It means
- that each organisation in the 'organisations' table can be associated with zero to multiple users
- from the 'users' table.
-- users to organisations: This association specifies a multiplicity of one-to-one (1..1). It
- signifies that each user in the 'users' table is uniquely associated with one organisation in the
- 'organisations' table.
+- organisations to users: This association indicates a multiplicity of
+ one-to-many (1..\*). It means that each organisation in the 'organisations'
+ table can be associated with zero to multiple users from the 'users' table.
+- users to organisations: This association specifies a multiplicity of
+ one-to-one (1..1). It signifies that each user in the 'users' table is
+ uniquely associated with one organisation in the 'organisations' table.
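The two multiplicities above can be demonstrated in a few lines of plain Ruby. The structs and sample data below are illustrative stand-ins for the database tables, not OnTrack's actual model classes.

```ruby
# Plain-Ruby sketch of the organisations/users relationships (names illustrative).
Organisation = Struct.new(:id, :name)
User         = Struct.new(:id, :organisation_id)

org   = Organisation.new(10, "Example Org")
users = [User.new(1, 10), User.new(2, 10), User.new(3, 11)]

# organisations -> users (1..*): one organisation can have many members.
members = users.select { |u| u.organisation_id == org.id }
puts members.size # 2

# users -> organisations (1..1): every user carries exactly one organisation_id.
puts users.map(&:organisation_id).all? { |oid| !oid.nil? } # true
```

In Rails terms this maps to `has_many :users` on the organisation side and `belongs_to :organisation` on the user side, backed by the `organisation_id` foreign key.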
diff --git a/src/content/docs/Products/OnTrack/Documentation/Sidekiq Investigation/sidekiq-investigation.md b/src/content/docs/Products/OnTrack/Documentation/Sidekiq Investigation/sidekiq-investigation.md
index 7a163034d..6973167a6 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Sidekiq Investigation/sidekiq-investigation.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Sidekiq Investigation/sidekiq-investigation.md
@@ -4,63 +4,70 @@ title: "Spike Research: Integrating Sidekiq with Ruby on Rails"
## Objective
-The purpose of this spike is to explore the integration of Sidekiq for background job processing in
-our Ruby on Rails application, understand its setup and configuration, and identify any potential
-issues that may arise during its implementation and deployment.
+The purpose of this spike is to explore the integration of Sidekiq for
+background job processing in our Ruby on Rails application, understand its setup
+and configuration, and identify any potential issues that may arise during its
+implementation and deployment.
## Introduction
-Background job processing is an essential component of modern web applications, allowing
-long-running tasks to be handled asynchronously to improve user experience and system performance.
-Sidekiq is a Ruby background job processor that uses threads to handle many jobs at the same time in
-the same process.
+Background job processing is an essential component of modern web applications,
+allowing long-running tasks to be handled asynchronously to improve user
+experience and system performance. Sidekiq is a Ruby background job processor
+that uses threads to handle many jobs at the same time in the same process.
## Methodology
-Research for this spike was conducted by reviewing Sidekiq's official documentation, community
-forums, and GitHub issues. Additionally, a prototype was created in a development environment to
-test the integration points and monitor the behaviour of Sidekiq within the context of our
-application.
+Research for this spike was conducted by reviewing Sidekiq's official
+documentation, community forums, and GitHub issues. Additionally, a prototype
+was created in a development environment to test the integration points and
+monitor the behaviour of Sidekiq within the context of our application.
## Findings
1. Installation and Configuration
- Sidekiq is easily installable as a gem in Ruby on Rails.
-- Configuration is straightforward, with the need to set up a sidekiq.yml file and initialize the
- Redis server, which Sidekiq uses for job storage.
-- Sidekiq's dashboard provides a web interface to monitor job queues, which can be mounted within
- Rails routes.
+- Configuration is straightforward, with the need to set up a sidekiq.yml file
+ and initialize the Redis server, which Sidekiq uses for job storage.
+- Sidekiq's dashboard provides a web interface to monitor job queues, which can
+ be mounted within Rails routes.
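Mounting the dashboard mentioned above uses Sidekiq's bundled Rack application. This is a standard sketch of that setup rather than our project's exact configuration; in production the route should sit behind authentication (for example a Devise constraint) so the queue contents are not publicly visible.

```ruby
# config/routes.rb -- expose Sidekiq's web dashboard inside Rails routes.
# Sidekiq::Web is the Rack app shipped with the sidekiq gem.
require "sidekiq/web"

Rails.application.routes.draw do
  mount Sidekiq::Web => "/sidekiq"
end
```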
2. Operational Insights
- Sidekiq requires Redis to be available and properly configured.
-- Memory usage is manageable, but careful monitoring is required to prevent leaks over time.
-- Concurrency settings and job prioritization are critical for optimal performance.
+- Memory usage is manageable, but careful monitoring is required to prevent
+ leaks over time.
+- Concurrency settings and job prioritization are critical for optimal
+ performance.
3. Deployment Considerations
- Deployment to platforms like Heroku requires additional add-ons for Redis.
-- Environment variables need to be managed securely, especially for the Redis URL.
+- Environment variables need to be managed securely, especially for the Redis
+ URL.
- Sidekiq can be scaled independently by increasing worker dynos.
4. Best Practices
-- Regularly update the Sidekiq gem to benefit from the latest improvements and security patches.
+- Regularly update the Sidekiq gem to benefit from the latest improvements and
+ security patches.
- Ensure idempotency of jobs to avoid duplicating work in case of retries.
-- Monitor Sidekiq with tools like New Relic or Sentry to track failures and performance issues.
+- Monitor Sidekiq with tools like New Relic or Sentry to track failures and
+ performance issues.
## Challenges and Solutions
1. - Challenge: Ensuring jobs are retried correctly after failures.
- - Solution: Implementing custom retry logic within jobs and leveraging Sidekiq's middleware for
- error handling.
-2. - Challenge: Handling large job volumes without overloading the Redis instance.
+ - Solution: Implementing custom retry logic within jobs and leveraging
+ Sidekiq's middleware for error handling.
+2. - Challenge: Handling large job volumes without overloading the Redis
+ instance.
- Solution: Scaling Redis and optimizing job size and complexity.
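The idempotency recommendation above boils down to a guard at the top of each job: a retried job must detect that its work is already done and exit without side effects. The sketch below is dependency-free plain Ruby with illustrative names; a real Sidekiq job would check a database flag rather than an in-memory set.

```ruby
# Plain-Ruby illustration of an idempotency guard (names illustrative;
# an in-memory Set stands in for a persisted "already processed" flag).
require "set"

PROCESSED = Set.new

def perform(submission_id)
  # A retried job finds its id already recorded and exits without
  # duplicating work.
  return :skipped if PROCESSED.include?(submission_id)
  PROCESSED << submission_id
  :processed
end

puts perform(42) # first run does the work
puts perform(42) # a retry of the same job is a safe no-op
```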
## Conclusion
-The integration of Sidekiq into our Ruby on Rails application appears to be a robust solution for
-our background processing needs. With its ease of use, extensive documentation, and active
-community, Sidekiq offers the features we require to improve our application's performance and
-reliability.
+The integration of Sidekiq into our Ruby on Rails application appears to be a
+robust solution for our background processing needs. With its ease of use,
+extensive documentation, and active community, Sidekiq offers the features we
+require to improve our application's performance and reliability.
diff --git a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/architecture-document.md b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/architecture-document.md
index 9329f9509..085875e4b 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/architecture-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/architecture-document.md
@@ -6,32 +6,35 @@ title: Architecture Document | Voice Verification for OnTrack Delivery
### Purpose
-This document provides a comprehensive architectural overview of the Voice Verification system,
-using a few different architectural views to depict different aspects of the system. It is intended
-to capture and convey the significant architectural decisions which have been made on the system.
+This document provides a comprehensive architectural overview of the Voice
+Verification system, using a few different architectural views to depict
+different aspects of the system. It is intended to capture and convey the
+significant architectural decisions which have been made on the system.
### Scope
-This Architecture Document provides an architectural overview of Voice Verification System. The
-Voice Verification System is being developed to address the issues concerning contract cheating on
-online learning management platforms.
+This Architecture Document provides an architectural overview of Voice
+Verification System. The Voice Verification System is being developed to address
+the issues concerning contract cheating on online learning management platforms.
## Architectural Goals and Constraints
### Goals
-- Students can register their voice on OnTrack using the Speaker Verification system.
+- Students can register their voice on OnTrack using the Speaker Verification
+ system.
- Upon task submission, the attached voice file is analysed for verification.
- Deployed on the OnTrack instance in a docker container format.
-- Support for gathering both front and backend telemetry should be present in the system to allow
- for analysis of user interaction, and system performance.
+- Support for gathering both frontend and backend telemetry should be present
+  in the system to allow for analysis of user interaction and system
+  performance.
### Constraints
-- Speaker Verification system must have compatibility for voice recording across multiple browsers.
+- Speaker Verification system must have compatibility for voice recording across
+ multiple browsers.
- Front-end components should comply with existing OnTrack requirements.
-- System should adhere to existing OnTrack privacy/compliance requirements in addition to existing
- OnTrack security requirements.
+- System should adhere to existing OnTrack privacy/compliance requirements in
+ addition to existing OnTrack security requirements.
## Use-Case View
@@ -39,22 +42,26 @@ online learning management platforms.

-1. As a student, I want Ontrack to have a function that can identifies me by my voice.
- **Description:** The feature highlighted through this user story is having a "Enrol the
- voiceprint". This feature allows a student to register a voiceprint for later verification
-
-2. As a student submitting my assignments, I want able to upload audio files to Ontrack.
- **Description:** The feature highlighted through this user story is having a "Submit a voice
- file”. This feature allows a student to submit an assignment audio to Ontrack System.
-3. As a Deep Speaker Classifier, “I” can recognise student by their voice at a confidence level.
- **Description:** The Deep Speaker Model is an actor involved within “Compare two audio samples”
- which will automatically confirm student’s identity by comparing their new voice submission to
- their voiceprint. This takes place within the Voice Verification Container.
-
-4. As a tutor/student, I want to receive the result of voice verification to be aware of the outcome
- of the verification. **Description:** Voice Verification system will return/export the voice
- verification result to the Tutor and Student (a confidence score of how likely it is that the
- voice in the recording is the student in question) in a readable way.
+1. As a student, I want OnTrack to have a function that can identify me by my
+ voice. **Description:** The feature highlighted through this user story is an
+ "Enrol the voiceprint" function. This feature allows a student to register a
+ voiceprint for later verification.
+
+2. As a student submitting my assignments, I want to be able to upload audio
+ files to OnTrack. **Description:** The feature highlighted through this user
+ story is a "Submit a voice file" function. This feature allows a student to
+ submit assignment audio to the OnTrack system.
+3. As a Deep Speaker Classifier, “I” can recognise a student by their voice at
+ a confidence level. **Description:** The Deep Speaker Model is an actor
+ involved within “Compare two audio samples”, which will automatically confirm
+ a student’s identity by comparing their new voice submission to their
+ voiceprint. This takes place within the Voice Verification Container.
+
+4. As a tutor/student, I want to receive the result of voice verification to be
+ aware of the outcome of the verification. **Description:** The Voice
+ Verification system will return the voice verification result to the Tutor
+ and Student (a confidence score of how likely it is that the voice in the
+ recording is the student in question) in a readable way.
## Logical View
@@ -64,23 +71,25 @@ online learning management platforms.
### Detailed description of the architecture diagram
-The diagram shows the communication types between each of the systems of the project. The User
-interacts with both the frontend website OnTrack and the voice verification system through a Ruby
-app. The voice verification method used takes advantage of Deep Speaker. Deep Speaker is a deep
-learning model that can be used to verify a user's identity by comparing their voice to a
-voiceprint. The voice verification system is deployed in a docker container format.
+The diagram shows the communication types between each of the systems of the
+project. The User interacts with both the frontend website OnTrack and the
+voice verification system through a Ruby app. The voice verification method
+takes advantage of Deep Speaker, a deep learning model that can verify a
+user's identity by comparing their voice to a voiceprint. The voice
+verification system is deployed as a Docker container.
### General Flow diagram

-The User has its requests go through the existing OnTrack system, with the OnTrack system sending
-further requests to the Voice Verification API. The sends the voice files to the docker container.
+The user's requests go through the existing OnTrack system, with the OnTrack
+system sending further requests to the Voice Verification API. The API then
+sends the voice files to the Docker container.
## Size and Performance
-The Size and Performance as of this stage cannot be calculated. However, the following information
-should be recorded when the system has been developed:
+The size and performance of the system cannot be calculated at this stage.
+However, the following information should be recorded once the system has been
+developed:
- Size of Voice Files for enrolment and verification.
- Response time for API calls
@@ -88,8 +97,9 @@ should be recorded when the system has been developed:
## Quality
-The Quality of the system must be further measured. The required information is as follows:
+The Quality of the system must be further measured. The required information is
+as follows:
- Quality of Voice Validation results
-- Testing of Voice Submissions (placing multiple speakers in the audio file, placing the speech at
- different stage of the audio file)
+- Testing of Voice Submissions (placing multiple speakers in the audio file,
+ placing the speech at different stages of the audio file)
diff --git a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/audio-system-interface-design.md b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/audio-system-interface-design.md
index 13a8d7199..8a77c51a4 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/audio-system-interface-design.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/audio-system-interface-design.md
@@ -8,7 +8,8 @@ title: Audio System Interface Design Document
- Author: [agahis](https://github.com/agahis)
- Team: OnTrack – Voice Verification
-- Team (Delivery and/or Product) Lead: [Shae Christmas](https://github.com/ShaeChristmas)
+- Team (Delivery and/or Product) Lead:
+ [Shae Christmas](https://github.com/ShaeChristmas)
## Document Summary
@@ -16,11 +17,12 @@ title: Audio System Interface Design Document
- Documentation Title: Audio System Interface Design Document
- Documentation Type: Technical
-- Documentation Information Summary: Design document detailing the implementation of the OnTrack
- Voice Verification audio system interface, showcased by wire frames for frontend development.
- Interface to allow tutors to see the results from the voice verification test with the new OnTrack
- Overseer system as well. The perspective is from a tutors as they are the only ones who have
- access to it.
+- Documentation Information Summary: Design document detailing the
+ implementation of the OnTrack Voice Verification audio system interface,
+ showcased by wireframes for frontend development. The interface also allows
+ tutors to see the results from the voice verification test with the new
+ OnTrack Overseer system. The perspective is a tutor's, as tutors are the
+ only ones who have access to it.
## Document Review Information
@@ -48,21 +50,23 @@ title: Audio System Interface Design Document
---
-See [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
+See
+[Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
## Low Fidelity Designs
---
-**Figure 1** below shows the initial sketches and brainstorming put into place for the tutors
-interface when accessing the voice verification results.
+**Figure 1** below shows the initial sketches and brainstorming put into place
+for the tutors' interface when accessing the voice verification results.

-**Figure 2** below shows a digital draft design for the flowchart between the three different
-results and visualisations shown in Figure 1. The visualisations are shown for the results and
-similar conventions used for display. Branching off to the main processes that would subsequently
-become an output from clicking on these results.
+**Figure 2** below shows a digital draft design for the flowchart between the
+three different results and visualisations shown in Figure 1. The
+visualisations for the results use similar display conventions, branching off
+to the main processes that are triggered by clicking on these results.

@@ -70,22 +74,22 @@ become an output from clicking on these results.
---
-**Figures 3 and 4** below shows the process and results when the tutor clicks on an audio file.
-These figures show files that are still pending.
+**Figures 3 and 4** below show the process and results when the tutor clicks
+on an audio file. These figures show files that are still pending.


-**Figures 5 and 6** below shows the process and results of when the tutor clicks on an verification
-pending audio file.
+**Figures 5 and 6** below show the process and results when the tutor clicks
+on a verification-pending audio file.


-**Figures 7 and 8** shows the process and results of when the tutor clicks on an audio file that has
-completed the Verification process.
+**Figures 7 and 8** show the process and results when the tutor clicks on an
+audio file that has completed the Verification process.

diff --git a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-design-document.md b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-design-document.md
index 2dcb85663..210caab37 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-design-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-design-document.md
@@ -16,8 +16,8 @@ title: OnTrack Voice Verification Design Document
- Documentation Title: Voice Verification Design Document
- Documentation Type: Technical
-- Documentation Information Summary: Design Document detailing implementation of Voice Verification
- system in the OnTrack Project
+- Documentation Information Summary: Design Document detailing implementation of
+ Voice Verification system in the OnTrack Project
## Document Review Information
@@ -34,10 +34,11 @@ title: OnTrack Voice Verification Design Document
CLI: Command Line Interface; Interacting with something through the terminal
-Docker Container: A small program contained inside a virtual machine. The containerisation program
-used is called Docker.
+Docker Container: A lightweight, isolated environment that packages a program
+with its dependencies. The containerisation program used is called Docker.
-RabbitMQ: A message broker that allows for the communication between different programs.
+RabbitMQ: A message broker that allows for the communication between different
+programs.
## Key Links/Resources
@@ -53,7 +54,8 @@ RabbitMQ: A message broker that allows for the communication between different p
---
-See [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
+See
+[Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
## Related Documents
@@ -65,69 +67,75 @@ See [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/READM
---
-OnTrack as a platform allows for students to track assessments for enrolled subjects, and submit
-their work once completed. Audio submissions have been a substitute for in-person discussions in
-recent years.
+OnTrack as a platform allows students to track assessments for enrolled
+subjects, and submit their work once completed. Audio submissions have been a
+substitute for in-person discussions in recent years.
-The OnTrack Voice Verification system aims to verify audio submissions, to ensure that the speaker
-in the submission is the correct student.
+The OnTrack Voice Verification system aims to verify audio submissions, to
+ensure that the speaker in the submission is the correct student.
-This system would be implemented inside the existing OnTrack Project, and integrated into OnTrack by
-using the pre-existing audio submission system.
+This system would be implemented inside the existing OnTrack Project, and
+integrated into OnTrack by using the pre-existing audio submission system.
## Problem Statement
---
-When submitting an assessment with an oral component, the student may take advantage of the OnTrack
-audio submissions system.
+When submitting an assessment with an oral component, the student may take
+advantage of the OnTrack audio submissions system.
-However, any audio file may be submitted through this system; it is not verified at any stage in the
-current OnTrack implementation. Contract cheating or other methods of cheating could be used, and
-would not be picked up by the system automatically.
+However, any audio file may be submitted through this system; it is not verified
+at any stage in the current OnTrack implementation. Contract cheating or other
+methods of cheating could be used, and would not be picked up by the system
+automatically.
-A possible method to cheat by taking advantage of the pre-existing system would be to pay someone
-else to answer audio questions. As no verification process is taking place, tutors may not identify
-that the person speaking is not the student who is being assessed.
+A possible method to cheat by taking advantage of the pre-existing system would
+be to pay someone else to answer audio questions. As no verification process is
+taking place, tutors may not identify that the person speaking is not the
+student who is being assessed.
-A verification system for testing audio submissions against a baseline audio sample would make this
-type of cheating more difficult.
+A verification system for testing audio submissions against a baseline audio
+sample would make this type of cheating more difficult.
-The voice verification system would give a confidence in the speakers identity, which could then be
-verified by an assessor if necessary.
+The voice verification system would give a confidence score for the speaker's
+identity, which could then be verified by an assessor if necessary.
-As such, this allows for greater verification of submissions, and ensuring that cheating using the
-audio submission system can be minimised.
+As such, this allows for greater verification of submissions, and ensures that
+cheating using the audio submission system is minimised.
## Current Works
---
-The current voice verification system is not linked to the OnTrack architecture. Instead, the system
-is implemented as a Docker Container, that can accept audio inputs, and produces a confidence
-variable with certainty of the speakers identity.
+The current voice verification system is not linked to the OnTrack
+architecture. Instead, the system is implemented as a Docker container that
+accepts audio inputs and produces a confidence value indicating certainty of
+the speaker's identity.
-At this stage, the system receives a known sample, and a new audio file. These must be manually
-submitted to the container through the CLI.
+At this stage, the system receives a known sample, and a new audio file. These
+must be manually submitted to the container through the CLI.
-As such, a system to link the existing Docker container to the OnTrack system must be implemented
-for automatic verification and display of results.
+As such, a system to link the existing Docker container to the OnTrack system
+must be implemented for automatic verification and display of results.
## Design
---
-The Voice Verification Architecture uses similar to a system in place within OnTrack called OnTrack
-Overseer.
+The Voice Verification architecture is similar to a system already in place
+within OnTrack, called OnTrack Overseer.
-When an audio file is received in the database, a trigger is sent to the Message Queue system that
-the Voice Verification architecture employs. This system uses RabbitMQ as a message queue, to send
-files to be verified to the main Voice Verification container. This container uses Deep Speaker
-verification to test the new file against the baseline file collected for that student. Then, the
-confidence value appended to the message on the message queue, and saved in the database.
+When an audio file is received in the database, a trigger is sent to the
+Message Queue system that the Voice Verification architecture employs. This
+system uses RabbitMQ as a message queue to send files to be verified to the
+main Voice Verification container. This container uses Deep Speaker
+verification to test the new file against the baseline file collected for that
+student. Then, the confidence value is appended to the message on the message
+queue, and saved in the database.
-After the confidence value is saved in the database alongside the file, this can be retrieved by the
-system. This retrieval takes place when the file is requested for marking.
+After the confidence value is saved in the database alongside the file, this can
+be retrieved by the system. This retrieval takes place when the file is
+requested for marking.
### Architecture
@@ -135,9 +143,9 @@ system. This retrieval takes place when the file is requested for marking.
### Data Formats
-The Voice Verification system uses similar data formats to the OnTrack system. The audio files are
-stored in an SQLite database, attached to the OnTrack API. In the database, three new values are
-appended to audio submissions:
+The Voice Verification system uses similar data formats to the OnTrack system.
+The audio files are stored in an SQLite database, attached to the OnTrack API.
+In the database, three new values are appended to audio submissions:
| Database Tag | Purpose | Possible Values | Example |
| ------------ | ---------------------------------------------------------------------------------------- | --------------- | ------- |
@@ -149,23 +157,24 @@ These values are appended to the existing documents in the SQLite Database.
### Data Flow
-The messages in the Voice Verification Message Queue should follow the same structure as the OnTrack
-Overseer Message Queue. Requests to the database have the following parameters:
+The messages in the Voice Verification Message Queue should follow the same
+structure as the OnTrack Overseer Message Queue. Requests to the database have
+the following parameters:
- `task_id`: task associated with the submission
- `submission`: path to the submission zip file or folder
-- `overseer_assessment_id`: id of the overseer message. used to keep track of individual
- assessments.
+- `overseer_assessment_id`: ID of the overseer message, used to keep track of
+ individual assessments.
-Messages to the Voice Verification system also contain a `baseline` parameter, which is the file
-path to the baseline audio sample for that student.
+Messages to the Voice Verification system also contain a `baseline` parameter,
+which is the file path to the baseline audio sample for that student.
Messages from the Voice Verification system have the following parameters:
- `task_id`: task associated with the submission
- `submission`: path to the submission zip file or folder
-- `overseer_assessment_id`: id of the overseer message. used to keep track of individual
- assessments.
+- `overseer_assessment_id`: ID of the overseer message, used to keep track of
+ individual assessments.
- `confidence`: confidence value returned from the verification system
- `verification time`: when the verification was completed.
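The message parameters above can be sketched as a payload builder. This is a minimal illustration, not the project's actual wire format: the `build_verification_result` helper, the snake_case `verification_time` key (the document writes `verification time`), and the example values are all assumptions.

```python
import json


def build_verification_result(task_id, submission, overseer_assessment_id,
                              confidence, verification_time):
    """Assemble a Voice Verification result message.

    Field names follow this document; the exact format used by the
    OnTrack Overseer message queue may differ.
    """
    return {
        "task_id": task_id,
        "submission": submission,
        "overseer_assessment_id": overseer_assessment_id,
        "confidence": confidence,
        "verification_time": verification_time,
    }


message = build_verification_result(
    task_id=42,
    submission="submissions/42/submission.zip",
    overseer_assessment_id=7,
    confidence=0.91,
    verification_time="2023-05-01T10:30:00Z",
)
payload = json.dumps(message)  # body to publish on the RabbitMQ queue
```

A publisher would then send `payload` to the queue; the database update appends the `confidence` and verification-time values to the existing submission record.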
@@ -173,9 +182,10 @@ These values are then appended to the existing documents in the SQLite Database.
### User Interaction
-Ideally, students wont no interaction with the verification system. Once an audio file has been
-submitted, it is automatically be queued for verification. Once verified, the assessor can listen to
-the audio submission, and view the confidence value.
+Ideally, students won't have any interaction with the verification system.
+Once an audio file has been submitted, it is automatically queued for
+verification. Once verified, the assessor can listen to the audio submission,
+and view the confidence value.
### Testing
@@ -186,16 +196,18 @@ Testing for the implemented system would must include the following strategies:
- Verification of files with multiple different speakers.
- Verification of files with no speakers.
-Additionally, other methods of bypassing the system should be investigated. This would include
-testing database security; more specifically where the validation results are stored.
+Additionally, other methods of bypassing the system should be investigated. This
+would include testing database security; more specifically where the validation
+results are stored.
-Finally, testing different values for confidence thresholds would allow for more refined use of the
-voice verification system.
+Finally, testing different values for confidence thresholds would allow for more
+refined use of the voice verification system.
## Success metrics
---
-To measure the success of the system, a Confusion Matrix should be generated to determine the false
-positive and false negative rate of the system. As results would be validated by an assessor, this
-information can be tracked per assessor, and collated for review.
+To measure the success of the system, a Confusion Matrix should be generated to
+determine the false positive and false negative rate of the system. As results
+would be validated by an assessor, this information can be tracked per assessor,
+and collated for review.
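The proposed metric can be sketched as follows. This is an illustrative outline only: the 0.5 accept threshold, the helper names, and the sample data are assumptions, not values from the project.

```python
def confusion_matrix(results, threshold=0.5):
    """Count TP/FP/TN/FN for verification results.

    results: list of (confidence, is_genuine) pairs, where is_genuine is
    the assessor's ground-truth decision. A confidence at or above the
    threshold is treated as an accept.
    """
    tp = fp = tn = fn = 0
    for confidence, is_genuine in results:
        accepted = confidence >= threshold
        if accepted and is_genuine:
            tp += 1
        elif accepted and not is_genuine:
            fp += 1
        elif not accepted and not is_genuine:
            tn += 1
        else:
            fn += 1
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn}


def error_rates(cm):
    """False positive rate (impostors accepted) and false negative rate
    (genuine students rejected)."""
    fpr = cm["fp"] / (cm["fp"] + cm["tn"]) if (cm["fp"] + cm["tn"]) else 0.0
    fnr = cm["fn"] / (cm["fn"] + cm["tp"]) if (cm["fn"] + cm["tp"]) else 0.0
    return fpr, fnr


# Hypothetical assessor-validated results: (confidence, genuine speaker?)
sample = [(0.92, True), (0.40, True), (0.81, False), (0.10, False)]
cm = confusion_matrix(sample, threshold=0.5)
fpr, fnr = error_rates(cm)
```

Sweeping the threshold over such collated results would also support the threshold-tuning testing described earlier in the document.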
diff --git a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-srs-document.md b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-srs-document.md
index 5b21046b2..c8d736469 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-srs-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-srs-document.md
@@ -6,41 +6,44 @@ title: Software Requirements Specifications Document
## Product Purpose
-The purpose of the Voice Verification System for OnTrack is to add the ability for Voice Samples
-that are submitted to OnTrack to undergo a verification process to ensure that the speaker in the
-sample is the same as the person taking part in the unit. Specifically, this is to identify when a
-student is contract cheating, or if the person in the specific submission is also the person
-undertaking the unit.
-
-The intended audience for this project is the users of OnTrack; both students for using the system
-to submit and verify their own audio files, as well as Tutors, who would be able to see the results
-of the verification and verify that the student has undertaken the task themselves.
-
-The systems intended use is for the verification of Audio files that are submitted as part of Deakin
-assessments to OnTrack, to further verify that the student has done the work themselves and is not
-taking part in cheating; more specifically, to verify that the student has not hired someone else to
-do the task for them, as is the case with Contract Cheating.
-
-The ccope of the project is to verify and validate a Python Container that can compare two voice
-samples and give the confidence level that the person speaking is the same in both voice samples.
-This requires a deployment to a testing system, as well as deployment to the OnTrack staging
-platform for Thoth Tech.
+The purpose of the Voice Verification System for OnTrack is to add the ability
+for Voice Samples that are submitted to OnTrack to undergo a verification
+process to ensure that the speaker in the sample is the same as the person
+taking part in the unit. Specifically, this is to identify when a student is
+contract cheating, or if the person in the specific submission is also the
+person undertaking the unit.
+
+The intended audience for this project is the users of OnTrack; both students
+for using the system to submit and verify their own audio files, as well as
+Tutors, who would be able to see the results of the verification and verify that
+the student has undertaken the task themselves.
+
+The system's intended use is for the verification of audio files that are
+submitted as part of Deakin assessments to OnTrack, to further verify that the
+student has done the work themselves and is not taking part in cheating; more
+specifically, to verify that the student has not hired someone else to do the
+task for them, as is the case with Contract Cheating.
+
+The scope of the project is to verify and validate a Python Container that can
+compare two voice samples and give the confidence level that the person
+speaking is the same in both voice samples. This requires a deployment to a
+testing system, as well as deployment to the OnTrack staging platform for
+Thoth Tech.
## Description of overall System
## User requirements
-The user requirements of the system are that the system needs to be usable by both Students and
-Tutors. These requirements include:
+The user requirements state that the system needs to be usable by both
+Students and Tutors. These requirements include:
- Ability to submit voice files for Enrolment and Verification
-- Attainment of results for Students and Tutors to show the validity of the voice file in the
- context of the assessment.
+- Attainment of results for Students and Tutors to show the validity of the
+ voice file in the context of the assessment.
- Ease of use
- Secure system
-These requirements are mainly focused on the user experience, and how the user will interact with
-the system.
+These requirements are mainly focused on the user experience, and how the user
+will interact with the system.
## Assumptions and Dependencies
@@ -52,19 +55,23 @@ This system has a few assumptions. These include:
- Tutors use the system whenever a voice submission is required.
- The same person is speaking throughout the entirety of the voice files.
-Each of these assumptions is important for the use and requirements of the system. The system should
-be able to deal with multiple requests in quick succession, be actively deployed to the OnTrack
-System, and have strict requirements for the initial voice file.
+Each of these assumptions is important for the use and requirements of the
+system. The system should be able to deal with multiple requests in quick
+succession, be actively deployed to the OnTrack System, and have strict
+requirements for the initial voice file.
-Furthermore, a few different aspects are relied upon for the project to function. These include:
+Furthermore, a few different aspects are relied upon for the project to
+function. These include:
- OnTrack as a deployment platform
-- Deployment of the full connected system (OnTrack, plus API, and the Python Container)
+- Deployment of the full connected system (OnTrack, plus API, and the Python
+ Container)
-These assumptions are that OnTrack is used as the deployment platform for the voice verification
-system, mainly as this is where it is being more properly integrated and developed for.
-Additionally, for OnTrack to function correctly, the full system (Frontend and API) needs to be
-deployed and using the Python Container effectively.
+These assumptions are that OnTrack is used as the deployment platform for the
+voice verification system, mainly as this is where the system is being
+integrated and developed. Additionally, for OnTrack to function correctly, the
+full system (Frontend and API) needs to be deployed and to use the Python
+Container effectively.
## System Requirements
@@ -72,20 +79,23 @@ deployed and using the Python Container effectively.
The functional requirements of the system are as follows:
-- The system should be able to accept an enrolment voice file for later comparison.
-- The system should be able to accept a new voice file to validate against the enrolment file.
-- The system should return readable results to the users (Both Student and Tutor).
+- The system should be able to accept an enrolment voice file for later
+ comparison.
+- The system should be able to accept a new voice file to validate against the
+ enrolment file.
+- The system should return readable results to the users (Both Student and
+ Tutor).
## Interface Requirements
-The interface for the system will be entirely within the OnTrack platform. As such, it will have the
-following requirements:
+The interface for the system will be entirely within the OnTrack platform. As
+such, it will have the following requirements:
-- The system's interface should be following the same format and design as other sections of the
- OnTrack Platform.
+- The system's interface should follow the same format and design as other
+ sections of the OnTrack Platform.
- The system should be easy to use for both Tutors and Students.
-- The system should return results in a readable way and be clear about the results of the
- verification.
+- The system should return results in a readable way and be clear about the
+ results of the verification.
## Hardware Interfaces
@@ -100,29 +110,33 @@ A basic internet connection is required to view the site.
The speaker verification system includes the following components:
- A Python library for audio file validation (Python 3.8)
-- Speaker Verification API: contain the backend RESTful API implemented in Django and Python
-- Doubtfire and Speaker Verification Integration: Ruby app that integrates the Speaker Verification
- API with OnTrack (Doubtfire LMS) via RabbitMQ queue
+- Speaker Verification API: contains the backend RESTful API implemented in
+ Django and Python
+- Doubtfire and Speaker Verification Integration: Ruby app that integrates the
+ Speaker Verification API with OnTrack (Doubtfire LMS) via RabbitMQ queue
- Docker-compose: contains the most likely setup for development environments
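The components listed above suggest a Docker Compose development setup along these lines. This is a hypothetical sketch only; the service names, build paths, and ports are assumptions, not the project's actual compose file:

```yaml
# Hypothetical development setup for the speaker verification components.
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672" # AMQP, used by the apps
      - "15672:15672" # management UI
  verification-api:
    build: ./speaker-verification-api # Django + Python 3.8 backend
    depends_on:
      - rabbitmq
  doubtfire-integration:
    build: ./doubtfire-integration # Ruby app bridging OnTrack and the API
    depends_on:
      - rabbitmq
```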
## System Features
-The system mainly focuses on the verification of voice files. As such, the features of the system
-are as follows:
+The system mainly focuses on the verification of voice files. As such, the
+features of the system are as follows:
- The system will accept voice files for the enrolment of a student in a Unit.
-- The system can accept new voice files to verify that the same student is speaking in both files.
-- The system will compare two voice files and produce a confidence rating, outlining how confident
- it is that the speaker is the same in both voice files.
-- The system will return the results to the Tutor and Student to ensure that the users are aware of
- the outcome of the verification.
+- The system can accept new voice files to verify that the same student is
+ speaking in both files.
+- The system will compare two voice files and produce a confidence rating,
+ outlining how confident it is that the speaker is the same in both voice
+ files.
+- The system will return the results to the Tutor and Student to ensure that the
+ users are aware of the outcome of the verification.
## Non-functional requirements
-The non-functional requirements of the system largely revolve around the data storage and security
-of the system. These include:
+The non-functional requirements of the system largely revolve around the data
+storage and security of the system. These include:
-- The system will only keep track of Enrolment files, for later verification use.
+- The system will only keep track of Enrolment files, for later verification
+ use.
- The system will not store voice files for submission and verification.
- The system will be secure.
- The system should be easy to use.
@@ -130,17 +144,18 @@ of the system. These include:
## Definitions, Acronyms, Abbreviations
-- Docker: a simple container that specify a complete package of components needed to run your
- software, or an application build and deployment tool
-- RabbitMQ: a message-queueing software also known as a message broker or queue manager. It is
- software where queues are defined, to which applications connect in order to transfer a message or
- messages.
-- Python Library: a library is a collection of pre-written code in Python language that provide
- further access to system
+- Docker: a simple container format that specifies a complete package of
+ components needed to run your software, or an application build and
+ deployment tool
+- RabbitMQ: message-queueing software, also known as a message broker or queue
+ manager. It is software where queues are defined, to which applications
+ connect in order to transfer a message or messages.
+- Python Library: a collection of pre-written code in the Python language that
+ provides further functionality to a system
- Ruby On Rails: a server-side web application framework written in Ruby.
-- Backend: development that happens behind the scenes, it is all the parts of a computer system or
- application that is not directly accessed by the user, it is responsible for storing and
- manipulating data through code (language use: Python, Ruby)
-- Frontend: development on what the user can see and/or directly interact with (language uses:
- Angular JS and TypeScript)
+- Backend: development that happens behind the scenes; it covers all the parts
+ of a computer system or application that are not directly accessed by the
+ user, and is responsible for storing and manipulating data through code
+ (languages used: Python, Ruby)
+- Frontend: development on what the user can see and/or directly interact with
+ (languages used: AngularJS and TypeScript)
- API: Application Programming Interface
diff --git a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-user-design-document.md b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-user-design-document.md
index 1addd6d69..50ab93bf6 100644
--- a/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-user-design-document.md
+++ b/src/content/docs/Products/OnTrack/Documentation/Voice Verification/voice-verification-user-design-document.md
@@ -16,8 +16,8 @@ title: OnTrack Voice Verification User Document
- Documentation Title: Voice Verification User Document
- Documentation Type: Documentation
-- Documentation Information Summary: User Design Document detailing guide on Enrolment - How
- Students can register their voice to Voice Verification System
+- Documentation Information Summary: User Design Document detailing a guide on
+ Enrolment: how students can register their voice with the Voice Verification
+ System
## Document Review Information
@@ -32,8 +32,8 @@ title: OnTrack Voice Verification User Document
---
-A voiceprint is another way to use your unique features to identify who you are, similar to a
-fingerprint.
+A voiceprint is another way to use your unique features to identify who you are,
+similar to a fingerprint.
## Key Links/Resources
@@ -49,7 +49,8 @@ fingerprint.
---
-See [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
+See
+[Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md).
## Related Documents
@@ -62,25 +63,29 @@ See [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/READM
---
-The OnTrack Voice Verification system allows student to enrol their voice and use their voice print
-to verify their identity when discussing or submitting works
+The OnTrack Voice Verification system allows students to enrol their voice and
+use their voice print to verify their identity when discussing or submitting
+work.
-This system would be implemented inside the existing OnTrack Project, and integrated into OnTrack by
-using the pre-existing audio submission system.
+This system would be implemented inside the existing OnTrack Project, and
+integrated into OnTrack by using the pre-existing audio submission system.
Voice Verification system has two phases:
-- Enrolment - Student's voice is recorded and specific voice features are extracted into a voice
- print.
+- Enrolment - Student's voice is recorded and specific voice features are
+ extracted into a voice print.
-- Verification - Student's audio submission is compared against a previously created voice print.
+- Verification - Student's audio submission is compared against a previously
+ created voice print.
## Main Process

-- Verified: The audio file passed a certain confidence value and concluded as same person
-- Unverified:The audio file is under a confidence value range and concluded as not a same person
+- Verified: The audio file passed a certain confidence value and is concluded
+ to be the same person
+- Unverified: The audio file is under the confidence value range and is
+ concluded to be a different person
- Pending: The audio is queued, awaiting comparison
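The three outcomes above amount to a threshold check on the confidence score. A minimal Python sketch, assuming an illustrative threshold of 0.75 and a hypothetical function name (the system's actual values and API are not specified here):

```python
def classify_verification(confidence, threshold=0.75):
    """Map a speaker-verification confidence score to an outcome.

    `confidence` is None while the audio is still queued for comparison.
    The 0.75 threshold is an illustrative assumption, not the system's
    actual setting.
    """
    if confidence is None:
        return "Pending"  # still awaiting comparison in the queue
    return "Verified" if confidence >= threshold else "Unverified"
```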
## Constraints
@@ -92,7 +97,8 @@ Voice Verification system has two phases:
3. Speaking language: English
4. The voice must be between three seconds and one minute
5. The volumes must not exceed 5 MB
-6. Supported file types: .wav, mp3, m4a, .flac (now the voice system only accepts .flac type files)
+6. Supported file types: .wav, .mp3, .m4a, .flac (currently the voice system
+ only accepts .flac files)
Tips: Speak at a normal cadence and clearly.
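The constraints above can be checked before an enrolment file is uploaded. A minimal sketch, assuming hypothetical function and constant names (the real system's validation code is not shown here):

```python
import os

ALLOWED_EXTENSIONS = {".flac"}    # .wav/.mp3/.m4a are listed but not yet accepted
MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5 MB limit from constraint 5
MIN_SECONDS, MAX_SECONDS = 3, 60  # three seconds to one minute (constraint 4)

def check_enrolment_file(path, duration_seconds, size_bytes):
    """Return a list of constraint violations; an empty list means OK."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported file type: {ext or 'none'}")
    if not MIN_SECONDS <= duration_seconds <= MAX_SECONDS:
        problems.append("recording must be between 3 seconds and 1 minute")
    if size_bytes > MAX_SIZE_BYTES:
        problems.append("file exceeds 5 MB")
    return problems
```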
diff --git a/src/content/docs/Products/OnTrack/Documentation/review-ontrack-github.md b/src/content/docs/Products/OnTrack/Documentation/review-ontrack-github.md
index 97e23f53b..667c55c84 100644
--- a/src/content/docs/Products/OnTrack/Documentation/review-ontrack-github.md
+++ b/src/content/docs/Products/OnTrack/Documentation/review-ontrack-github.md
@@ -2,8 +2,8 @@
title: Review of OnTrack folder in GitHub Document
---
-This document reviews the folders within OnTrack repository and determines which folders fall under
-what categories.
+This document reviews the folders within the OnTrack repository and determines
+which folders fall under what categories.
| **Projects** | **Documentation** |
| ---------------------------------------------- | --------------------------------------------------------------------- |
diff --git a/src/content/docs/Products/OnTrack/Issues and Resolutions/doubtfire-in-codespaces.md b/src/content/docs/Products/OnTrack/Issues and Resolutions/doubtfire-in-codespaces.md
index ea03e45b8..37544a672 100644
--- a/src/content/docs/Products/OnTrack/Issues and Resolutions/doubtfire-in-codespaces.md
+++ b/src/content/docs/Products/OnTrack/Issues and Resolutions/doubtfire-in-codespaces.md
@@ -12,13 +12,15 @@ title: Spike - Investigate running Dev container and code base in CodeSpaces
## Goals / Deliverables
-Creating a cloud-based development environment using GitHub Codespaces to run Ontrack is a valuable
-initiative to streamline the setup process for students struggling with local development
-environments. To explore this, here's a step-by-step guide on setting up Ontrack in a Codespace.
-
-Codespaces offer a flexible, cloud-based development environment but might have limitations
-depending on the specific requirements of the application. Testing and validation are crucial to
-ensure it meets the needs of running Ontrack effectively.
+Creating a cloud-based development environment using GitHub Codespaces to run
+Ontrack is a valuable initiative to streamline the setup process for students
+struggling with local development environments. To explore this, here's a
+step-by-step guide on setting up Ontrack in a Codespace.
+
+Codespaces offer a flexible, cloud-based development environment but might have
+limitations depending on the specific requirements of the application. Testing
+and validation are crucial to ensure it meets the needs of running Ontrack
+effectively.
## Technologies, Tools, and Resources used
@@ -35,29 +37,30 @@ ensure it meets the needs of running Ontrack effectively.
## Tasks undertaken
1. Creating a Codespace
- - Sign in to GitHub and navigate to the repository containing Ontrack. (Make sure you fork the
- repository first from thoth-tech/dotfire-deploy, thoth-tech/doubtfire-web and
- thoth-tech/doubtfire-api)
- - Click on the "Code" button and select "Open with Codespaces" or navigate to "Code" > "New
- Codespace."
+ - Sign in to GitHub and navigate to the repository containing Ontrack. (Make
+ sure you fork the repository first from thoth-tech/doubtfire-deploy,
+ thoth-tech/doubtfire-web and thoth-tech/doubtfire-api)
+ - Click on the "Code" button and select "Open with Codespaces" or navigate to
+ "Code" > "New Codespace."

2. Install Docker-in-Docker in Codespace
- - Confirm that Docker is installed and running in the Codespace by running the following command
- in the terminal: wihch docker
+ - Confirm that Docker is installed and running in the Codespace by running
+ the following command in the terminal: `which docker`

3. Configuring Codespace for Ontrack:
- - Codespaces use a configuration file called .devcontainer to define the development environment.
- Create a .devcontainer folder in the root of the Ontrack repository if it doesn’t exist.
- (Replace this folder with the existing .devcontainer folder in the repository)
+ - Codespaces use a configuration file called .devcontainer to define the
+ development environment. Create a .devcontainer folder in the root of the
+ Ontrack repository if it doesn’t exist. (Replace this folder with the
+ existing .devcontainer folder in the repository)
4. Running Ontrack in Codespace:
- - Codespaces will use the defined configuration to create a containerized environment. It will
- automatically install dependencies, clone the repository, and set up Ontrack based on the
- configuration provided.
+ - Codespaces will use the defined configuration to create a containerized
+ environment. It will automatically install dependencies, clone the
+ repository, and set up Ontrack based on the configuration provided.
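Steps 3 and 4 hinge on the `.devcontainer` configuration. A minimal `devcontainer.json` along these lines could work; note that the base image, forwarded ports, and startup command here are assumptions for illustration, not the repository's actual configuration:

```json
{
  "name": "OnTrack",
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "forwardPorts": [3000, 4200],
  "postCreateCommand": "./run-full.sh"
}
```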

@@ -67,9 +70,9 @@ ensure it meets the needs of running Ontrack effectively.
## What we found out
-The front-end seemed to run fine, but the back-end was not working as expected. The back-end server
-was not running, and the database was not connected. The following error was displayed in the
-terminal:
+The front-end seemed to run fine, but the back-end was not working as expected.
+The back-end server was not running, and the database was not connected. The
+following error was displayed in the terminal:
```shell
ERROR: ActionDispatch::HostAuthorization::DefaultResponseApp Blocked host:
@@ -83,57 +86,66 @@ ERROR: ActionDispatch::HostAuthorization::DefaultResponseApp Blocked host: You need to change “docker compose” of file run-full.sh in doubtfire-deploy/development
+ > You need to change “docker compose” in the file run-full.sh in
+ > doubtfire-deploy/development
2. doubtfire-web doesn’t compile successfully:
- Open terminal
@@ -32,6 +33,7 @@ title: Troubleshooting Doubtfire Setup
## 4. Give Up
-Still cannot deploy it? Maybe it’s time to give up, you can just use Burp Suite and pentest online
-on my VPS: **IMPORTANT**: don’t scan with BurpSuite you guys won’t find
-anything anyway because of the anchor tag.
+Still cannot deploy it? Maybe it’s time to give up; you can just use Burp Suite
+and pentest online on my VPS: **IMPORTANT**: don’t
+scan with Burp Suite; you won’t find anything anyway because of the anchor
+tag.
diff --git a/src/content/docs/Products/OnTrack/Ontrack Setup/How to Run OnTrack with Ubuntu.md b/src/content/docs/Products/OnTrack/Ontrack Setup/How to Run OnTrack with Ubuntu.md
index cc4b7d0c9..4b46a1978 100644
--- a/src/content/docs/Products/OnTrack/Ontrack Setup/How to Run OnTrack with Ubuntu.md
+++ b/src/content/docs/Products/OnTrack/Ontrack Setup/How to Run OnTrack with Ubuntu.md
@@ -18,14 +18,16 @@ folder.
## 1. Download Ubuntu and Rufus
-- Download [Ubuntu](https://ubuntu.com/download/desktop) from the official website.
+- Download [Ubuntu](https://ubuntu.com/download/desktop) from the official
+ website.
- Download [Rufus](https://rufus.ie/en/).
## 2. Create a Bootable USB Drive
1. Open Rufus.
2. Select the USB drive from the 'Device' dropdown.
-3. Click the 'SELECT' button to choose the Ubuntu ISO (should be in your downloads folder).
+3. Click the 'SELECT' button to choose the Ubuntu ISO (should be in your
+ downloads folder).
4. Click the 'START' button at the bottom.

@@ -37,14 +39,14 @@ folder.

-3. There should be a list of boot options including Windows Boot Manager. Select the bootable USB
- with the Ubuntu ISO.
+3. There should be a list of boot options including Windows Boot Manager. Select
+ the bootable USB with the Ubuntu ISO.
4. The Ubuntu OS will load in portable mode from the USB.
## 4. Install Ubuntu on External SSD
-1. After booting into Ubuntu you should be provided with the option to Try Ubuntu or Install Ubuntu.
- Select Install Ubuntu.
+1. After booting into Ubuntu you should be provided with the option to Try
+ Ubuntu or Install Ubuntu. Select Install Ubuntu.

@@ -52,7 +54,8 @@ folder.

-3. Proceed through the initial steps until you reach the "Installation type" step.
+3. Proceed through the initial steps until you reach the "Installation type"
+ step.
4. Choose the "Something else" option to manually configure partitions.
5. Identify the external SSD as a device like `/dev/sdb`.
@@ -84,8 +87,8 @@ folder.
cd Downloads
```
-5. Enter the following command in the terminal to connect to the wifi network and follow the prompts
- to enter your Deakin username and password.
+5. Enter the following command in the terminal to connect to the wifi network
+ and follow the prompts to enter your Deakin username and password.
```shell
sh SecureW2_JoinNow.run
@@ -116,7 +119,8 @@ folder.
## 7. Clone OnTrack Repository
-1. Clone the OnTrack repository (change `YOUR_USERNAME` to your GitHub username):
+1. Clone the OnTrack repository (change `YOUR_USERNAME` to your GitHub
+ username):
```shell
git clone --recurse-submodules https://github.com/YOUR_USERNAME/doubtfire-deploy
@@ -129,7 +133,8 @@ folder.
code .
```
-3. Run change remotes script in the integrated terminal to change the remote to your own repository.
+3. Run change remotes script in the integrated terminal to change the remote to
+ your own repository.
```shell
./change-remotes.sh
@@ -139,6 +144,6 @@ folder.
## 8. Run OnTrack
-1. After re-opening vscode, the script should automatically run and open the OnTrack application in
- your browser.
+1. After re-opening vscode, the script should automatically run and open the
+ OnTrack application in your browser.
2. Happy coding!
diff --git a/src/content/docs/Products/OnTrack/Ontrack Setup/TutorialSetupT3.mdx b/src/content/docs/Products/OnTrack/Ontrack Setup/TutorialSetupT3.mdx
index 9cf2b126d..723c95a7b 100644
--- a/src/content/docs/Products/OnTrack/Ontrack Setup/TutorialSetupT3.mdx
+++ b/src/content/docs/Products/OnTrack/Ontrack Setup/TutorialSetupT3.mdx
@@ -10,12 +10,14 @@ Before starting the setup, ensure the following are installed and ready:
- Git
- A GitHub account (used to fork the repositories)
-If you have attempted the setup previously and encountered errors. It is recommended to restart
-Docker Desktop Follow this guide from the beginning to avoid configuration issues.
+If you have attempted the setup previously and encountered errors, it is
+recommended to restart Docker Desktop and follow this guide from the beginning
+to avoid configuration issues.
## Video walkthrough
-If you prefer a visual guide, you can watch the full OnTrack development setup walkthrough here:
+If you prefer a visual guide, you can watch the full OnTrack development setup
+walkthrough here:
▶ **OnTrack development setup video**
@@ -47,11 +49,13 @@ IMPORTANT: Keep Docker Desktop running throughout the development process.
## 2. FORKING THE REQUIRED REPOSITORIES
-OnTrack consists of three main repositories that need to be forked from Doubtfire LMS:
+OnTrack consists of three main repositories that need to be forked from
+Doubtfire LMS:
### Required Repositories
-OnTrack consists of three main repositories that need to be forked from Doubtfire LMS:
+The three repositories are:
1. `doubtfire-deploy`
- Contains `docker-compose` configuration
@@ -130,14 +134,17 @@ Steps:
or
`cd C:\Users\YOUR_USERNAME\Documents\Projects`
-- Clone `doubtfire-deploy` first: `git clone https://github.com/YOUR_USERNAME/doubtfire-deploy.git`
+- Clone `doubtfire-deploy` first:
+ `git clone https://github.com/YOUR_USERNAME/doubtfire-deploy.git`
- Navigate into the directory: `cd doubtfire-deploy`
-- Clone the remaining repositories: `git clone https://github.com/YOUR_USERNAME/doubtfire-api.git`
+- Clone the remaining repositories:
+ `git clone https://github.com/YOUR_USERNAME/doubtfire-api.git`
`git clone https://github.com/YOUR_USERNAME/doubtfire-web.git`
-- (Optional) Clone `doubtfire-lti`: `git clone https://github.com/YOUR_USERNAME/doubtfire-lti.git`
+- (Optional) Clone `doubtfire-lti`:
+ `git clone https://github.com/YOUR_USERNAME/doubtfire-lti.git`
You should now have: `doubtfire-deploy/`
`├── doubtfire-api/`
@@ -146,8 +153,8 @@ You should now have: `doubtfire-deploy/`
## 5. SETTING UP GIT REMOTES
-Git remotes allow you to sync with both your fork (`origin`) and the ThothTech repository
-(`upstream`).
+Git remotes allow you to sync with both your fork (`origin`) and the ThothTech
+repository (`upstream`).
Understanding remotes:
diff --git a/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-doc.md b/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-doc.md
index 78730c6af..c6a612187 100644
--- a/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-doc.md
+++ b/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-doc.md
@@ -4,100 +4,106 @@ title: Design a way to improve the group Task submission - Documen
## Solution 1: Selecting Students Who Can Submit
-Solution 1 requires certain changes to be made in the frontend and backend which are described as
-follows:
+Solution 1 requires certain changes to be made in the frontend and backend,
+which are described as follows:
## Frontend Changes
- **User Interface Updates:**
-The frontend interface needs to be updated to allow instructors to select which students are
-eligible to submit a particular task. This could involve adding a checkbox or similar UI element for
-each student when creating or configuring a task.
+The frontend interface needs to be updated to allow instructors to select which
+students are eligible to submit a particular task. This could involve adding a
+checkbox or similar UI element for each student when creating or configuring a
+task.
- **Task Submission:**
-The task submission process for students should include a check to determine if they are eligible to
-submit based on the CanSubmitTask attribute. If they are not eligible, an appropriate error message
-should be displayed.
+The task submission process for students should include a check to determine if
+they are eligible to submit based on the CanSubmitTask attribute. If they are
+not eligible, an appropriate error message should be displayed.
- **Task Status Display:**
-The frontend should display the submission status of each task, showing whether it has been
-submitted or not.
+The frontend should display the submission status of each task, showing whether
+it has been submitted or not.
**Backend Changes:**
- **Database Schema Updates:**
-The database schema needs to be updated to include the CanSubmitTask attribute in the User table.
+The database schema needs to be updated to include the CanSubmitTask attribute
+in the User table.
- **Task Submission Logic:**
-The backend logic for task submission should check the CanSubmitTask attribute of the user to
-determine whether the submission is allowed. If allowed, update the SubmissionStatus attribute of
-the associated task to "Submitted."
+The backend logic for task submission should check the CanSubmitTask attribute
+of the user to determine whether the submission is allowed. If allowed, update
+the SubmissionStatus attribute of the associated task to "Submitted."
- **API Endpoints:**
-New API endpoints might be needed to manage task submission eligibility, such as updating the
-CanSubmitTask attribute for users.
+New API endpoints might be needed to manage task submission eligibility, such as
+updating the CanSubmitTask attribute for users.
- **Data Validation:**
-Backend logic should validate that only eligible students can be associated with tasks when creating
-or updating tasks.
+Backend logic should validate that only eligible students can be associated with
+tasks when creating or updating tasks.
- **Error Handling:**
-Proper error handling and status codes should be implemented to handle cases where submission is not
-allowed.
+Proper error handling and status codes should be implemented to handle cases
+where submission is not allowed.
- **Notifications:**
-Instructors may want to be notified when a student submits a task, or when a submission is rejected
-due to eligibility.
+Instructors may want to be notified when a student submits a task, or when a
+submission is rejected due to eligibility.
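The submission flow described for Solution 1 can be sketched in a few lines. This is a minimal illustration using hypothetical Python class and function names (the actual backend is Ruby on Rails); it only shows the CanSubmitTask/SubmissionStatus logic, not real project code:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    can_submit_task: bool  # the CanSubmitTask attribute described above

@dataclass
class Task:
    title: str
    submission_status: str = "Not Submitted"  # the SubmissionStatus attribute

def submit_task(user: User, task: Task) -> str:
    """Check eligibility before recording a submission, per Solution 1."""
    if not user.can_submit_task:
        return "Error: you are not eligible to submit this task."
    task.submission_status = "Submitted"
    return "Task submitted."
```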
## Solution 2: Adding Password for Certain Students
-Solution 2 requires certain changes to be made in the frontend and backend which are described as
-follows:
+Solution 2 requires certain changes to be made in the frontend and backend,
+which are described as follows:
## Frontend Changes
- **User Interface Updates:**
-Modify the user interface to prompt students for their submission password when attempting to submit
-a task.
+Modify the user interface to prompt students for their submission password when
+attempting to submit a task.
- **Task Submission Form:**
-Add a field for students to enter their submission password while submitting a task.
+Add a field for students to enter their submission password while submitting a
+task.
- **Submission Validation:**
-Implement frontend logic to validate the submission password before allowing the task submission.
-Display appropriate messages if the password is incorrect.
+Implement frontend logic to validate the submission password before allowing the
+task submission. Display appropriate messages if the password is incorrect.
## Backend Changes
- **Database Schema Updates:**
-Update the database schema to include the SubmissionPassword attribute in the User and Task tables.
+Update the database schema to include the SubmissionPassword attribute in the
+User and Task tables.
- **Task Submission Logic:**
-Implement backend logic to compare the user's submitted password with the stored password. If they
-match, update the SubmissionStatus attribute of the associated task to "Submitted."
+Implement backend logic to compare the user's submitted password with the stored
+password. If they match, update the SubmissionStatus attribute of the associated
+task to "Submitted."
- **API Endpoints:**
-Create new API endpoints to handle the password validation during task submission.
+Create new API endpoints to handle the password validation during task
+submission.
- **Data Validation:**
-Implement backend data validation to ensure that only eligible users with the correct password can
-submit tasks.
+Implement backend data validation to ensure that only eligible users with the
+correct password can submit tasks.
- **Error Handling:**
@@ -105,14 +111,15 @@ Implement proper error handling for password validation and submission process.
- **Notifications:**
-Consider implementing notifications to inform users about successful or unsuccessful task
-submissions.
+Consider implementing notifications to inform users about successful or
+unsuccessful task submissions.
- **Security Measures:**
-Implement secure password storage practices (such as hashing and salting) to protect user passwords.
+Implement secure password storage practices (such as hashing and salting) to
+protect user passwords.
- **Password Management:**
-Provide a way for users to reset their submission password if needed and, ensure secure password
-reset procedures.
+Provide a way for users to reset their submission password if needed, and
+ensure secure password reset procedures.
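The hashing-and-salting practice mentioned under "Security Measures" can be sketched with Python's standard-library PBKDF2 helper. This is an illustrative sketch only (the actual backend is Ruby on Rails and would use its own mechanism); the function names are hypothetical:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2-SHA256 digest; returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```

Only the salt and digest are stored; the plaintext submission password never needs to be persisted.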
diff --git a/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-uml-design.md b/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-uml-design.md
index 87cdbb012..bc6d399a1 100644
--- a/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-uml-design.md
+++ b/src/content/docs/Products/OnTrack/Projects/Group Task Submission/group-task-submission-uml-design.md
@@ -4,26 +4,29 @@ title: Design a way to improve the group Task submission – UML design
## Solution 1: Selecting Students Who Can Submit
-In this solution, the system allows the instructor to specify which students are eligible to
-submitthe group task. Each user (student) is associated with a task through a one-to-one
-relationship, indicated by the "1" multiplicity on both sides of the association line. The
-CanSubmitTask attribute is a boolean alue indicating whether a user can submit the task. The
-SubmissionStatus attribute in the Task class reflects whether a task has been submitted or not. When
-a user submits the task, the system checks the CanSubmitTask attribute to determine if the
-submission is allowed. If allowed, the SubmissionStatus attribute of the task is updated to
-"Submitted." If not, the submission is rejected.
+In this solution, the system allows the instructor to specify which students are
+eligible to submit the group task. Each user (student) is associated with a task
+through a one-to-one relationship, indicated by the "1" multiplicity on both
+sides of the association line. The CanSubmitTask attribute is a boolean value
+indicating whether a user can submit the task. The SubmissionStatus attribute in
+the Task class reflects whether a task has been submitted or not. When a user
+submits the task, the system checks the CanSubmitTask attribute to determine if
+the submission is allowed. If allowed, the SubmissionStatus attribute of the
+task is updated to "Submitted." If not, the submission is rejected.

## Solution 2: Adding Password for Certain Students
-In this solution, a password-based approach is used to control task submissions. Each user (student)
-is associated with a task through a one-to-one relationship, indicated by the "1" multiplicity on
-both sides of the association line. Each user has a unique SubmissionPassword attribute acting as a
-password for task submission. Similarly, the Task class also has a SubmissionPassword attribute.
-When a user attempts to submit a task, they need to provide their SubmissionPassword. The system
-validates this password against the user's stored password. If the passwords match, the task is
-considered submitted, and the SubmissionStatus attribute of the task is updated to "Submitted."
-Otherwise, the submission is rejected.
+In this solution, a password-based approach is used to control task submissions.
+Each user (student) is associated with a task through a one-to-one relationship,
+indicated by the "1" multiplicity on both sides of the association line. Each
+user has a unique SubmissionPassword attribute acting as a password for task
+submission. Similarly, the Task class also has a SubmissionPassword attribute.
+When a user attempts to submit a task, they need to provide their
+SubmissionPassword. The system validates this password against the user's stored
+password. If the passwords match, the task is considered submitted, and the
+SubmissionStatus attribute of the task is updated to "Submitted." Otherwise, the
+submission is rejected.

diff --git a/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasFeasabilityCheck.md b/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasFeasabilityCheck.md
index 26317c3c6..0efe24476 100644
--- a/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasFeasabilityCheck.md
+++ b/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasFeasabilityCheck.md
@@ -12,13 +12,14 @@ title: Project feasability study document
## Preamble
-The aim of this study is to check the feasibility of setting up or linking Numbas into Ontrack a
-live production environment running on Rails/Angular.
+The aim of this study is to check the feasibility of setting up or linking
+Numbas into OnTrack, a live production environment running on Rails/Angular.
## Research information
-For this project I have been reviewing several links and pages of information to ensure we take the
-correct direction. As well as to upskill to ensure I have the key skills required for this project.
+For this project I have been reviewing several links and pages of information
+to ensure we take the correct direction, as well as to upskill in the key
+skills required for this project.
[https://angular.io/guide/standalone-components](https://angular.io/guide/standalone-components)
@@ -32,17 +33,19 @@ correct direction. As well as to upskill to ensure I have the key skills require
## Outcome
-So after some research the two main ways we can approach this task is to embed an iframe, then later
-capture the test objecet and store it.
+After some research, the two main ways we can approach this task are to embed
+an iframe and later capture the test object and store it.
-Or we can use the local NPM package and install Numbas as a package and configure and run the tests
-natively.
+Alternatively, we can use the local NPM package to install Numbas as a package
+and configure and run the tests natively.
-The second option initially looks more secure and longer to setup, I was concerned about iFrame from
-a security risk related to XSS, however it looks like in Angular 15 this was resolved.
+The second option initially looks more secure but longer to set up. I was
+concerned about the iframe posing a security risk related to XSS; however, it
+looks like this was resolved in Angular 15.
## Plan
-I will look at configuring both solutions and see which one performs best and gives us the best
-features moving forward. Hopefully by week 6 a have a trial version of both and make the final
-decision before tidying up the code to ensure it is production ready.
+I will look at configuring both solutions and see which one performs best and
+gives us the best features moving forward. Hopefully by week 6 I will have a
+trial version of both and can make the final decision before tidying up the
+code to ensure it is production ready.
diff --git a/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasProjectGuideline.md b/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasProjectGuideline.md
index 7fada5466..3f3f144b4 100644
--- a/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasProjectGuideline.md
+++ b/src/content/docs/Products/OnTrack/Projects/Numbas/NumbasProjectGuideline.md
@@ -8,27 +8,29 @@ title: Project guideline document
## Aim
-The aim is to introduce Numbas tests into the Ontrack platform, to ultimately save these results in
-the Database.
+The aim is to introduce Numbas tests into the OnTrack platform, and ultimately
+to save these results in the database.
-The first part of the project will be to enable the creation and use of a Numbas test, without
-storing the data, then time dependent we will look at enhancing this to saving the data to the
-backend and a means to access that information.
+The first part of the project will be to enable the creation and use of a
+Numbas test without storing the data; then, time permitting, we will look at
+enhancing this to save the data to the backend and provide a means to access
+that information.
## Key Outcomes
-- Feasibility report of product – Check how this can be achieved and the potential problems with the
- possible solutions.
-- Front end integration to either host or link an existing numbas test for students to complete.
+- Feasibility report of product – check how this can be achieved and the
+ potential problems with the possible solutions.
+- Front-end integration to either host or link an existing Numbas test for
+ students to complete.
- Back-end configuration to store the numbas test
- Security check of the component
## Delivery Time frames
-I will be aiming to have the feasibility report and the Front end integration working to a standard
-of production by the end of T1 2023.
+I will be aiming to have the feasibility report and the front-end integration
+working to a production standard by the end of T1 2023.
-We will be aiming for Back end configuration and storing of tests by the end of T2 2023.
+We will be aiming for back-end configuration and storing of tests by the end of
+T2 2023.
## Team Members
diff --git a/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffNumbas.md b/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffNumbas.md
index 38f8f7b73..60b73e4bf 100644
--- a/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffNumbas.md
+++ b/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffNumbas.md
@@ -10,10 +10,11 @@ title: Project Name:Numbas Integration
## **Scope**
-The purpose of this project is to integrate Numbas testing into Ontrack. With the aim for a test to
-be presented to the student on submission of a task, prior to submitting reflections or other
-required documents. The aim is o be able to let the Unit Chair import, setup and assign tests to the
-tasks. Also for students to be able to complete tests as part of the submission process.
+The purpose of this project is to integrate Numbas testing into OnTrack, with
+the aim of presenting a test to the student on submission of a task, prior to
+the submission of reflections or other required documents. The aim is to let
+the Unit Chair import, set up, and assign tests to tasks, and for students to
+complete tests as part of the submission process.
## **Outcomes**
@@ -23,7 +24,8 @@ This project will be deliverying:
- A feasibility study of the ways this can be implemented.
- A rough design document including:
  - Rough hand-drawn design documents for how this integration will work.
- - A data flow diagram of how different data will be accesssed and encapsualted.
+  - A data flow diagram of how different data will be accessed and
+    encapsulated.
- Diagrams showing model changes to the core model
- Backend coding changes to accommodate and store the tests.
- Front end code to support the changes from the Unit chair and student view.
@@ -40,25 +42,30 @@ This project will be deliverying:
- Unit chairs can upload Numbas tests to a task definition
- Unit chairs can validate that the test works
- Unit chairs can set the required pass level for the test
- - Unit chairs can set the number of attempts before test needs to be reset by a tutor
- - Unit chairs can set the delay between attempts to be a set number of minutes, or a built-in
- increasing delay
- - Students are required to pass the test before they can submit work for feedback
+  - Unit chairs can set the number of attempts before the test needs to be
+    reset by a tutor
+ - Unit chairs can set the delay between attempts to be a set number of
+ minutes, or a built-in increasing delay
+ - Students are required to pass the test before they can submit work for
+ feedback
- Students can view their test attempts (can unit chairs disable this?)
- Tutors can view student test attempts
- - Tutors can reset student tests to enable additional attempts - or require resit on resubmission
+ - Tutors can reset student tests to enable additional attempts - or require
+ resit on resubmission
- Test results are included in the portfolio when generated
-There will need to be a means to upload the test files that are created locally via Numbas. An
-Addtional window after "requesting feedback" on a task that will present the test, this will then
-either take you to the next stage if you pass or go back to the task screen if you do not pass.
+There will need to be a means to upload the test files that are created locally
+via Numbas. An additional window after "requesting feedback" on a task will
+present the test; this will either take you to the next stage if you pass or
+return you to the task screen if you do not.
-There will need to be a configuration section within the Unit chair task setup page.
+There will need to be a configuration section within the Unit Chair task setup
+page.
We will provide different options for the test setup such as:
-1: Restricted / Unlimited test attempts 2: Delays in test attempts - minutes, or built-in
-increamenting delay
+1. Restricted or unlimited test attempts
+2. Delays between test attempts: a set number of minutes, or a built-in
+   incrementing delay
We will also need to either enable or disable a test.
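The two delay options above can be sketched as follows; the incrementing schedule (doubling from a base delay) is an assumption for illustration, not a settled design:

```typescript
// Sketch of the two attempt-delay options. The doubling schedule for the
// built-in incrementing delay is an assumption, not a settled design.
type DelayPolicy =
  | { kind: "fixed"; minutes: number }
  | { kind: "incrementing"; baseMinutes: number };

function delayBeforeAttempt(policy: DelayPolicy, attemptNumber: number): number {
  if (attemptNumber <= 1) return 0; // the first attempt is never delayed
  if (policy.kind === "fixed") return policy.minutes;
  // Incrementing delay: doubles with each further attempt.
  return policy.baseMinutes * Math.pow(2, attemptNumber - 2);
}
```

An unlimited-attempts setup would simply skip any attempt-count check before applying the policy.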
@@ -68,16 +75,19 @@ We will also need to either enable or disable a test.
**Data Flow Design**
-In terms of the changes we will require, we will need a new table in the DB to store the tests.
+In terms of the changes we will require, we will need a new table in the DB to
+store the tests.
-We will need to create a new API and model to transfer the test data between the front and the back
-end.
+We will need to create a new API and model to transfer the test data between
+the front end and the back end.
-We will need to create a new service and model in Angular to accomodate this, we will also need to
-adjust the existing services such as Unit/Tasks to include the test objects for a student user.
+We will need to create a new service and model in Angular to accommodate this;
+we will also need to adjust the existing services, such as Unit/Tasks, to
+include the test objects for a student user.
-Then we will need to create a new componente for taking the test, as well as adjust the Unit Chair
-Task Setup component to include the new settings as per the design above.
+Then we will need to create a new component for taking the test, as well as
+adjust the Unit Chair Task Setup component to include the new settings as per
+the design above.
## **Sign Off:**
@@ -87,4 +97,5 @@ Delivery Lead Signature:
Team Member Signature:
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
-Client Signature: \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
+Client Signature:
+\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
diff --git a/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffTemplate.md b/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffTemplate.md
index 2407a9a50..0bdfb00fb 100644
--- a/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffTemplate.md
+++ b/src/content/docs/Products/OnTrack/Projects/Numbas/ProjectSignOffTemplate.md
@@ -12,7 +12,8 @@ title: Project Documentation Template
## **Scope**
-\<\\>
+\<\\>
## **Outcomes**
@@ -20,8 +21,8 @@ title: Project Documentation Template
## **Delivery**
-\<\\>
+\<\\>
\<\< Also include Delivery times for stages of the project\>\>
@@ -33,4 +34,5 @@ Delivery Lead Signature:
Team Member Signature:
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
-Client Signature: \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
+Client Signature:
+\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
diff --git a/src/content/docs/Products/OnTrack/Projects/Numbas/SpikeOutcome-Scorm2004.md b/src/content/docs/Products/OnTrack/Projects/Numbas/SpikeOutcome-Scorm2004.md
index e94bd3acc..04d77eeca 100644
--- a/src/content/docs/Products/OnTrack/Projects/Numbas/SpikeOutcome-Scorm2004.md
+++ b/src/content/docs/Products/OnTrack/Projects/Numbas/SpikeOutcome-Scorm2004.md
@@ -10,9 +10,9 @@ title: Spike Outcomes
## Goals & Deliverables
-After several issues with the NUMBAS integration and having to move to SCORM 2004, this Spike is
-intended to revise the updated functionality of SCORM 2004 and how it works, how the functions can
-be used in the current project.
+After several issues with the Numbas integration and having to move to SCORM
+2004, this spike is intended to review the updated functionality of SCORM 2004:
+how it works, and how its functions can be used in the current project.
## Technologies, Tools and Resources Used
@@ -39,7 +39,8 @@ Key Tasks
## What we found out
-There are some big changes between SCORM 1.1 and 2004, the key methods used in 2004 are:
+There are some big changes between SCORM 1.1 and SCORM 2004; the key methods
+used in 2004 are:
- Initialize( “” ) : bool – Begins a communication session with the LMS.
@@ -47,68 +48,74 @@ There are some big changes between SCORM 1.1 and 2004, the key methods used in 2
- GetValue( element : CMIElement ) : string – Retrieves a value from the LMS.
-- SetValue( element : CMIElement, value : string) : string – Saves a value to the LMS.
+- SetValue( element : CMIElement, value : string) : string – Saves a value to
+ the LMS.
-- Commit( “” ) : bool – Indicates to the LMS that all data should be persisted (not required).
+- Commit( “” ) : bool – Indicates to the LMS that all data should be persisted
+ (not required).
-- GetLastError() : CMIErrorCode – Returns the error code that resulted from the last API call.
+- GetLastError() : CMIErrorCode – Returns the error code that resulted from the
+ last API call.
-- GetErrorString( errorCode : CMIErrorCode ) : string – Returns a short string describing the
- specified error code.
+- GetErrorString( errorCode : CMIErrorCode ) : string – Returns a short string
+ describing the specified error code.
-- GetDiagnostic( errorCode : CMIErrorCode ) : string – Returns detailed information about the last
- error that occurred.
+- GetDiagnostic( errorCode : CMIErrorCode ) : string – Returns detailed
+ information about the last error that occurred.
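The call sequence implied by these methods can be sketched against a minimal mock of the SCORM 2004 runtime object. The mock below is purely illustrative; in a real LMS the content locates the API object exposed as `window.API_1484_11` rather than constructing it:

```typescript
// Minimal mock of the SCORM 2004 runtime API, for illustration only.
// Real content would discover the LMS-provided window.API_1484_11 object.
type CMIErrorCode = string;

class MockScormApi {
  private data = new Map<string, string>();
  private lastError: CMIErrorCode = "0"; // "0" means "no error"

  Initialize(_: ""): boolean { this.lastError = "0"; return true; }
  Terminate(_: ""): boolean { this.lastError = "0"; return true; }
  GetValue(element: string): string { return this.data.get(element) ?? ""; }
  SetValue(element: string, value: string): string {
    this.data.set(element, value);
    return "true";
  }
  Commit(_: ""): boolean { return true; } // persist is a no-op in the mock
  GetLastError(): CMIErrorCode { return this.lastError; }
}

// Typical session: initialize, report a result, persist, terminate.
const api = new MockScormApi();
api.Initialize("");
api.SetValue("cmi.score.scaled", "0.85");
api.SetValue("cmi.success_status", "passed");
api.Commit("");
api.Terminate("");
```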
The key data model elements for our use are:
-- cmi.entry (ab_initio, resume, “”, RO) Asserts whether the learner has previously accessed the SCO
+- cmi.entry (ab_initio, resume, “”, RO) Asserts whether the learner has
+ previously accessed the SCO
-- cmi.exit (timeout, suspend, logout, normal, “”, WO) Indicates how or why the learner left the SCO
+- cmi.exit (timeout, suspend, logout, normal, “”, WO) Indicates how or why the
+ learner left the SCO
-- cmi.learner_id (long_identifier_type (SPM: 4000), RO) Identifies the learner on behalf of whom the
- SCO was launched
+- cmi.learner_id (long_identifier_type (SPM: 4000), RO) Identifies the learner
+ on behalf of whom the SCO was launched
-- cmi.mode (“browse”, “normal”, “review”, RO) Identifies one of three possible modes in which the
- SCO may be presented to the learner
+- cmi.mode (“browse”, “normal”, “review”, RO) Identifies one of three possible
+ modes in which the SCO may be presented to the learner
-- cmi.score.scaled (real (10,7) range (-1..1), RW) Number that reflects the performance of the
- learner
+- cmi.score.scaled (real (10,7) range (-1..1), RW) Number that reflects the
+ performance of the learner
-- cmi.score.raw (real (10,7), RW) Number that reflects the performance of the learner relative to
- the range bounded by the values of min and max
+- cmi.score.raw (real (10,7), RW) Number that reflects the performance of the
+ learner relative to the range bounded by the values of min and max
- cmi.score.min (real (10,7), RW) Minimum value in the range for the raw score
- cmi.score.max (real (10,7), RW) Maximum value in the range for the raw score
-- cmi.suspend_data (characterstring (SPM: 64000), RW) Provides space to store and retrieve data
- between learner sessions
+- cmi.suspend_data (characterstring (SPM: 64000), RW) Provides space to store
+ and retrieve data between learner sessions
-- cmi.total_time (timeinterval (second,10,2), RO) Sum of all of the learner’s session times
- accumulated in the current learner attempt
+- cmi.total_time (timeinterval (second,10,2), RO) Sum of all of the learner’s
+ session times accumulated in the current learner attempt
-- cmi.success_status (“passed”, “failed”, “unknown”, RW) Indicates whether the learner has mastered
- the SCO
+- cmi.success_status (“passed”, “failed”, “unknown”, RW) Indicates whether the
+ learner has mastered the SCO
-- cmi.session_time (time interval (second,10,2), WO) Amount of time that the learner has spent in
- the current learner session for this SCO
+- cmi.session_time (time interval (second,10,2), WO) Amount of time that the
+ learner has spent in the current learner session for this SCO
-By making use of the flags in the data model we can implement a resume test functionality, this will
-be done by saving the suspend data json string in the DB.
+By making use of the flags in the data model we can implement resume-test
+functionality; this will be done by saving the suspend-data JSON string in the
+DB.
-A new end point will need to be created to store the suspend data json string, as well as the
-Attempt number, status and isnew flag.
+A new endpoint will need to be created to store the suspend-data JSON string,
+as well as the attempt number, status, and isnew flag.
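A sketch of the resume flow described above, assuming a simple record shape for the new endpoint (the field names are illustrative rather than the final OnTrack API):

```typescript
// Hypothetical shape of the record the new endpoint would store.
// Field names here are illustrative, not the final OnTrack API.
interface TestAttemptState {
  taskId: number;
  attemptNumber: number;
  completionStatus: "completed" | "incomplete" | "unknown";
  isNew: boolean;
  suspendData: string; // opaque JSON string taken from cmi.suspend_data
}

// On exit, capture cmi.suspend_data so it can be sent to the endpoint.
function saveState(
  getValue: (element: string) => string,
  taskId: number,
  attemptNumber: number,
): TestAttemptState {
  return {
    taskId,
    attemptNumber,
    completionStatus: "incomplete",
    isNew: false,
    suspendData: getValue("cmi.suspend_data"),
  };
}

// On relaunch, restore the saved data and flag the session as a resume.
function restoreState(
  state: TestAttemptState,
  setValue: (element: string, value: string) => string,
): void {
  setValue("cmi.suspend_data", state.suspendData);
  setValue("cmi.entry", state.isNew ? "ab_initio" : "resume");
}
```

The save step would run during the Commit and Terminate handlers, matching the data model flags listed above.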
## Recommendations
-It is reccomended to build out the new endpoint needed in the PoC and then to implement a resume
-test functionality there
+It is recommended to build out the new endpoint needed in the PoC and then to
+implement resume-test functionality there
-prior to starting the build in Ontrack. The new Endpoint would be in addition to the current Numbas
-API that streams the test.
+prior to starting the build in OnTrack. The new endpoint would be in addition
+to the current Numbas API that streams the test.
-It would need a get and put functionlaity and would be making use of cmi.mode, cmi.suspend_data
-during both Commit and Terminate functions.
+It would need GET and PUT functionality and would make use of cmi.mode and
+cmi.suspend_data during both the Commit and Terminate functions.
-We also need to move the intitaliztion of the Window object outside of the launch test in the front
-end Angular code.
+We also need to move the initialization of the Window object outside of the
+launch test in the front-end Angular code.
diff --git a/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/DESIGN.md b/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/DESIGN.md
index cb2e044fa..33e8100e1 100644
--- a/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/DESIGN.md
+++ b/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/DESIGN.md
@@ -4,169 +4,193 @@ title: Design Document:OnTrack - Staff Grant Extension Featur
## 1-Introduction
-This document outlines the design approach for integrating the "Staff Grant Extension" feature into
-OnTrack, our learning management system. This feature empowers staff members to grant extensions to
-students, even without formal requests. The purpose is to cater to unique situations that might
-require tailored support.
-
-The "Staff Grant Extension" feature enables staff to initiate extension requests, define durations,
-and create extension records for students. The system automates notifications to students about
-granted extensions, fostering transparent communication.
-
-This design document covers technical implementation, user authentication, UI, error handling,
-testing, and deployment aspects. It ensures the feature's smooth integration, responsiveness,
-security, and scalability. The goal is to enhance OnTrack's adaptability and communication,
-ultimately benefiting both staff and students.
+This document outlines the design approach for integrating the "Staff Grant
+Extension" feature into OnTrack, our learning management system. This feature
+empowers staff members to grant extensions to students, even without formal
+requests. The purpose is to cater to unique situations that might require
+tailored support.
+
+The "Staff Grant Extension" feature enables staff to initiate extension
+requests, define durations, and create extension records for students. The
+system automates notifications to students about granted extensions, fostering
+transparent communication.
+
+This design document covers technical implementation, user authentication, UI,
+error handling, testing, and deployment aspects. It ensures the feature's smooth
+integration, responsiveness, security, and scalability. The goal is to enhance
+OnTrack's adaptability and communication, ultimately benefiting both staff and
+students.
## 2-Use Case
### 2-1-User Story
-As a staff member, I want to be able to grant extensions to students, even when no formal extension
-requests are submitted through the system. This allows me to accommodate special circumstances that
-may have been communicated through other means.
+As a staff member, I want to be able to grant extensions to students, even when
+no formal extension requests are submitted through the system. This allows me to
+accommodate special circumstances that may have been communicated through other
+means.
### 2-2-Acceptance Criteria
- Staff members can initiate extension requests for specific students.
- Staff members can specify the duration of the extension.
-- Extension requests initiated by staff members are recorded in the system for future reference.
+- Extension requests initiated by staff members are recorded in the system for
+ future reference.
- Students receive notifications about granted extensions.
## 3-High-level Architecture
-The "Staff Grant Extension" feature seamlessly integrates into the existing architecture of the
-OnTrack system. This architecture consists of both frontend and backend components, each
-contributing to the feature's functionality and user experience.
+The "Staff Grant Extension" feature seamlessly integrates into the existing
+architecture of the OnTrack system. This architecture consists of both frontend
+and backend components, each contributing to the feature's functionality and
+user experience.
### 3-1-Frontend Architecture
-The frontend of the feature is designed to provide an intuitive and user-friendly interface for
-staff members to initiate extension requests. The key components include:
+The frontend of the feature is designed to provide an intuitive and
+user-friendly interface for staff members to initiate extension requests. The
+key components include:
-- **Grant Extension Form:** Integrated into the staff dashboard, this form enables staff members to
- select a student, specify the extension duration, add relevant notes, and indicate the reason for
- the extension. A search interface allows easy student selection. This form is built using Angular
- and styled with Tailwind CSS for a consistent and responsive user experience.
+- **Grant Extension Form:** Integrated into the staff dashboard, this form
+ enables staff members to select a student, specify the extension duration, add
+ relevant notes, and indicate the reason for the extension. A search interface
+ allows easy student selection. This form is built using Angular and styled
+ with Tailwind CSS for a consistent and responsive user experience.
-- **Notifications**: The frontend also handles the notifications sent to students. Upon a staff
- member's extension grant, notifications are triggered. The frontend ensures these notifications
- are displayed to students, either through email or within the system.
+- **Notifications**: The frontend also handles the notifications sent to
+ students. Upon a staff member's extension grant, notifications are triggered.
+ The frontend ensures these notifications are displayed to students, either
+ through email or within the system.
### 3-2 Backend Architecture
-The backend architecture focuses on processing extension requests, managing extension records, and
-ensuring data security. Backend components include:
+The backend architecture focuses on processing extension requests, managing
+extension records, and ensuring data security. Backend components include:
-- **Extension Record Management:** When a staff member grants an extension, the backend stores
- extension records in the database. These records are associated with the student and staff member
- involved, along with the specified extension duration. Ruby on Rails, the backend framework,
- manages data interactions and database updates.
+- **Extension Record Management:** When a staff member grants an extension, the
+ backend stores extension records in the database. These records are associated
+ with the student and staff member involved, along with the specified extension
+ duration. Ruby on Rails, the backend framework, manages data interactions and
+ database updates.
-- **User Authentication and Authorisation:** The backend enforces user authentication to ensure that
- only authorised staff members can access the "Grant Extension" functionality. Access controls are
- implemented to secure data and maintain system integrity.
+- **User Authentication and Authorisation:** The backend enforces user
+ authentication to ensure that only authorised staff members can access the
+ "Grant Extension" functionality. Access controls are implemented to secure
+ data and maintain system integrity.
-The collaboration between frontend and backend components ensures a cohesive user experience. Staff
-members interact with the intuitive form at the frontend, triggering backend processes that record
-extension data securely in the database. Meanwhile, students receive notifications regarding granted
+The collaboration between frontend and backend components ensures a cohesive
+user experience. Staff members interact with the intuitive form at the frontend,
+triggering backend processes that record extension data securely in the
+database. Meanwhile, students receive notifications regarding granted
extensions, enhancing communication and transparency.
-This architecture underscores the feature's user-centric design, smooth integration with existing
-systems, and adherence to best practices for security and usability. The separation of frontend and
-backend responsibilities enables efficient development, testing, and maintenance, contributing to
-the feature's overall success within the OnTrack system.
+This architecture underscores the feature's user-centric design, smooth
+integration with existing systems, and adherence to best practices for security
+and usability. The separation of frontend and backend responsibilities enables
+efficient development, testing, and maintenance, contributing to the feature's
+overall success within the OnTrack system.
## 4-Technical Implementation
-The implementation of the "Staff Grant Extension" feature involves both frontend and backend
-development, utilizing the existing technology stack of the OnTrack system.
+The implementation of the "Staff Grant Extension" feature involves both frontend
+and backend development, utilizing the existing technology stack of the OnTrack
+system.
### 4-1 Frontend Implementation
#### _Grant Extension Form:_
-The frontend implementation revolves around the creation of the "Grant Extension" form within the
-staff dashboard. This involves the following steps:
+The frontend implementation revolves around the creation of the "Grant
+Extension" form within the staff dashboard. This involves the following steps:
-- **UI Integration:** Integrate the form seamlessly into the staff dashboard using Angular
- components. Ensure its responsive design using Tailwind CSS, providing a user-friendly experience
- across devices.
+- **UI Integration:** Integrate the form seamlessly into the staff dashboard
+ using Angular components. Ensure its responsive design using Tailwind CSS,
+ providing a user-friendly experience across devices.
-- **Form Fields:** Implement form fields for selecting students, entering extension duration, adding
- notes, and specifying the reason for the extension. Create an interface for searching and
- selecting students efficiently.
+- **Form Fields:** Implement form fields for selecting students, entering
+ extension duration, adding notes, and specifying the reason for the extension.
+ Create an interface for searching and selecting students efficiently.
#### _Notifications:_
The frontend is responsible for handling notifications sent to students:
-- **Notification Trigger:** Upon extension grant, trigger the notification mechanism. Depending on
- student preferences, notifications are sent either via email or displayed within the system.
+- **Notification Trigger:** Upon extension grant, trigger the notification
+ mechanism. Depending on student preferences, notifications are sent either via
+ email or displayed within the system.
### 4-2 Backend Implementation
#### _Extension Record Creation:_
-Backend implementation focuses on processing extension requests, managing extension records, and
-ensuring secure data handling:
+Backend implementation focuses on processing extension requests, managing
+extension records, and ensuring secure data handling:
-- **API Endpoint:** Create an API endpoint to handle extension grant requests from the frontend.
- Validate inputs, including extension duration.
+- **API Endpoint:** Create an API endpoint to handle extension grant requests
+ from the frontend. Validate inputs, including extension duration.
-- **Database Interaction:** Upon successful validation, store extension records in the database.
- Associate each record with the relevant student and staff member. Utilize Ruby on Rails' ORM
- (Object-Relational Mapping) for seamless data management.
+- **Database Interaction:** Upon successful validation, store extension records
+ in the database. Associate each record with the relevant student and staff
+ member. Utilize Ruby on Rails' ORM (Object-Relational Mapping) for seamless
+ data management.
#### _User Authentication and Authorisation:_
Implement user authentication and authorisation to secure the feature:
-- **Authentication:** Leverage existing authentication mechanisms to ensure only authenticated staff
- members access the "Grant Extension" functionality.
+- **Authentication:** Leverage existing authentication mechanisms to ensure only
+ authenticated staff members access the "Grant Extension" functionality.
-- **Authorisation:** Apply access controls to authorise staff members based on their roles and
- permissions. This guarantees data security and minimizes unauthorised access.
+- **Authorisation:** Apply access controls to authorise staff members based on
+ their roles and permissions. This guarantees data security and minimizes
+ unauthorised access.
-The successful integration of the frontend and backend components ensures the seamless operation of
-the feature. Staff members interact with the frontend form, which triggers backend processes to
-store extension records and handle notifications. This technical implementation enhances the OnTrack
-system's capabilities, enabling staff members to provide individualized support to students and
-fostering efficient communication within the platform.
+The successful integration of the frontend and backend components ensures the
+seamless operation of the feature. Staff members interact with the frontend
+form, which triggers backend processes to store extension records and handle
+notifications. This technical implementation enhances the OnTrack system's
+capabilities, enabling staff members to provide individualized support to
+students and fostering efficient communication within the platform.
## 5-Database Design
-The database design for the "Staff Grant Extension" feature revolves around efficiently storing
-extension records and maintaining the associations between students, staff members, and granted
-extensions.
+The database design for the "Staff Grant Extension" feature revolves around
+efficiently storing extension records and maintaining the associations between
+students, staff members, and granted extensions.
### _Extension Records:_
-- **Table:** Create a new table named "ExtensionRecords" to store extension-related information.
+- **Table:** Create a new table named "ExtensionRecords" to store
+ extension-related information.
- **Columns:**
- **id:** Unique identifier for each extension record.
- - **student_id:** Foreign key referencing the student associated with the extension.
- - **staff_member_id:** Foreign key pointing to the staff member who granted the extension.
+ - **student_id:** Foreign key referencing the student associated with the
+ extension.
+ - **staff_member_id:** Foreign key pointing to the staff member who granted
+ the extension.
- **duration:** The duration of the extension in days or hours.
- **reason:** The reason provided for granting the extension.
- **created_at:** Timestamp indicating when the extension record was created.
### _Associations:_
-- **Student and Staff Member Associations:** Establish relationships between extension records,
- students, and staff members using foreign keys.
-- **Extensions to Students:** Link extension records to the respective students they apply to.
-- **Extensions by Staff Members:** Associate extension records with the staff members who granted
- the extensions.
-
-This database design ensures efficient querying and retrieval of extension data, enabling staff
-members to track granted extensions and students to view their extension history.
-
-By adhering to this structured database design, the "Staff Grant Extension" feature effectively
-maintains a historical record of granted extensions and establishes clear relationships between
-students, staff members, and extension records. This architecture guarantees data integrity,
-simplifies reporting and auditing, and contributes to the seamless operation of the feature within
-the OnTrack system.
+- **Student and Staff Member Associations:** Establish relationships between
+ extension records, students, and staff members using foreign keys.
+- **Extensions to Students:** Link extension records to the respective students
+ they apply to.
+- **Extensions by Staff Members:** Associate extension records with the staff
+ members who granted the extensions.
+
+This database design ensures efficient querying and retrieval of extension data,
+enabling staff members to track granted extensions and students to view their
+extension history.
+
+By adhering to this structured database design, the "Staff Grant Extension"
+feature effectively maintains a historical record of granted extensions and
+establishes clear relationships between students, staff members, and extension
+records. This architecture guarantees data integrity, simplifies reporting and
+auditing, and contributes to the seamless operation of the feature within the
+OnTrack system.
[UML - Staff Grant Extension](https://lucid.app/lucidchart/06237ce4-9cd9-4aad-838f-45bff2249059/edit?invitationId=inv_da8c9660-84a6-46a3-9690-f210fc5ceb8d)
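As a sketch of the records this table would hold, a TypeScript model mirroring the columns above, together with the "extension history" query described (illustrative only; the real schema would be defined in a Rails migration):

```typescript
// Illustrative in-memory model of the ExtensionRecords table described
// above; the actual schema would live in a Rails migration.
interface ExtensionRecord {
  id: number;
  studentId: number;     // FK to the student granted the extension
  staffMemberId: number; // FK to the staff member who granted it
  duration: number;      // extension length in days
  reason: string;
  createdAt: Date;
}

// "Students can view their extension history": most recent grant first.
function extensionHistoryFor(
  records: ExtensionRecord[],
  studentId: number,
): ExtensionRecord[] {
  return records
    .filter((r) => r.studentId === studentId)
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}
```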
@@ -176,199 +200,227 @@ the OnTrack system.
## 7-Error Handling and Validation
-Error handling and validation are critical aspects of ensuring the robustness and reliability of the
-"Staff Grant Extension" feature. The system must effectively handle user errors and input anomalies
-while maintaining data integrity.
+Error handling and validation are critical aspects of ensuring the robustness
+and reliability of the "Staff Grant Extension" feature. The system must
+effectively handle user errors and input anomalies while maintaining data
+integrity.
### Frontend Validation
-- **Form Validation:** Implement client-side form validation to prevent invalid data from being
- submitted. Validate extension duration to ensure it's a positive numeric value.
-- **Error Messages:** Display clear error messages next to the relevant form fields in case of
- validation errors. Inform users about the specific issue and guide them towards correcting it.
+- **Form Validation:** Implement client-side form validation to prevent invalid
+ data from being submitted. Validate extension duration to ensure it's a
+ positive numeric value.
+- **Error Messages:** Display clear error messages next to the relevant form
+ fields in case of validation errors. Inform users about the specific issue and
+ guide them towards correcting it.
### Backend Validation
-- **API Input Validation:** Validate the input received from the frontend at the backend. Ensure
- that the duration is a valid positive numeric value and that all required fields are provided.
-- **Error Responses:** Return appropriate error responses if validation fails. Include relevant
- error codes and messages to guide developers in diagnosing and addressing the issue.
+- **API Input Validation:** Validate the input received from the frontend at the
+ backend. Ensure that the duration is a valid positive numeric value and that
+ all required fields are provided.
+- **Error Responses:** Return appropriate error responses if validation fails.
+ Include relevant error codes and messages to guide developers in diagnosing
+ and addressing the issue.
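The backend duration and required-field checks described above can be sketched in plain Ruby. This is a minimal illustration, not the actual OnTrack/Doubtfire code: the helper name, field names, and error codes are all assumptions.

```ruby
# Minimal sketch of the backend input validation described above.
# The method name, field names, and error codes are hypothetical and
# do not correspond to the actual OnTrack/Doubtfire codebase.
def validate_extension_params(params)
  errors = []

  duration = params["duration"].to_s.strip
  # The duration must be a positive whole number.
  if duration.empty?
    errors << { code: "missing_field", field: "duration",
                message: "Extension duration is required." }
  elsif duration !~ /\A\d+\z/ || duration.to_i <= 0
    errors << { code: "invalid_duration", field: "duration",
                message: "Extension duration must be a positive number." }
  end

  # All other required fields must be provided.
  %w[student_id reason].each do |field|
    if params[field].to_s.strip.empty?
      errors << { code: "missing_field", field: field,
                  message: "#{field} is required." }
    end
  end

  errors
end
```

An API controller could return these error objects with an HTTP 422 status so the frontend can show a message next to the offending field.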
## Database Integrity
-- **Foreign Key Integrity:** Ensure the integrity of foreign key relationships between extension
- records, students, and staff members. Reject extension creation if associated entities do not
- exist.
-- **Data Consistency:** Maintain consistent data by validating the input against predefined rules
- and constraints. Avoid situations where data conflicts or contradictions arise.
+- **Foreign Key Integrity:** Ensure the integrity of foreign key relationships
+ between extension records, students, and staff members. Reject extension
+ creation if associated entities do not exist.
+- **Data Consistency:** Maintain consistent data by validating the input against
+ predefined rules and constraints. Avoid situations where data conflicts or
+ contradictions arise.
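The foreign-key rule above can be sketched in plain Ruby: creation is rejected when the associated student or staff member does not exist. The in-memory lookup tables and method name are illustrative; in the real backend this would be enforced by database foreign-key constraints and Rails model validations.

```ruby
# Illustrative referential-integrity check: extension creation is
# rejected when the associated entities do not exist. The lookup
# tables and method name are hypothetical stand-ins for the database.
STUDENTS = { 42 => "Student A" }.freeze
STAFF    = { 7 => "Staff B" }.freeze

def create_extension(student_id:, staff_id:, duration:)
  unless STUDENTS.key?(student_id)
    return { ok: false, error: "Unknown student: #{student_id}" }
  end
  unless STAFF.key?(staff_id)
    return { ok: false, error: "Unknown staff member: #{staff_id}" }
  end
  { ok: true,
    record: { student_id: student_id, staff_id: staff_id, duration: duration } }
end
```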
## Exception Handling
-- **Server-Side Errors:** Implement exception handling in the backend to catch unexpected errors
- during processing. Log these errors for debugging purposes and provide users with a friendly error
- message.
+- **Server-Side Errors:** Implement exception handling in the backend to catch
+ unexpected errors during processing. Log these errors for debugging purposes
+ and provide users with a friendly error message.
-- **Client-Facing Errors:** Translate backend errors into user-friendly messages on the frontend to
- maintain a positive user experience.
+- **Client-Facing Errors:** Translate backend errors into user-friendly messages
+ on the frontend to maintain a positive user experience.
-By rigorously implementing error handling and validation mechanisms, the "Staff Grant Extension"
-feature ensures that user inputs are accurate, data integrity is maintained, and users are guided
-through corrective actions when necessary. This approach contributes to a seamless and
-frustration-free experience for both staff members and students, enhancing the overall reliability
-of the OnTrack system.
+By rigorously implementing error handling and validation mechanisms, the "Staff
+Grant Extension" feature ensures that user inputs are accurate, data integrity
+is maintained, and users are guided through corrective actions when necessary.
+This approach contributes to a seamless and frustration-free experience for both
+staff members and students, enhancing the overall reliability of the OnTrack
+system.
## 8-Testing Strategy
-Ensuring the robustness, security, and performance of the "Staff Grant Extension" feature is
-paramount. The testing strategy encompasses both backend and frontend components.
+Ensuring the robustness, security, and performance of the "Staff Grant
+Extension" feature is paramount. The testing strategy encompasses both backend
+and frontend components.
### 8-1 Backend Testing
-The backend testing strategy involves validating the functionality, security, and data integrity of
-the "Staff Grant Extension" feature.
+The backend testing strategy involves validating the functionality, security,
+and data integrity of the "Staff Grant Extension" feature.
### Test Case 1: Successful Extension Granting
-- **Description:** Verify that a staff member can successfully grant an xtension to a student.
+- **Description:** Verify that a staff member can successfully grant an
+ extension to a student.
- **Steps:**
1. Authenticate as a staff member.
2. Select a student.
3. Enter a valid extension duration.
4. Submit the form.
-- **Expected Outcome:** The extension is granted, a new extension record is created in the database,
- and both staff and student receive notifications.
+- **Expected Outcome:** The extension is granted, a new extension record is
+ created in the database, and both staff and student receive notifications.
### Test Case 2: Invalid Extension Duration
-- **Description:** Test the system's response when a staff member enters an invalid extension
- duration.
+- **Description:** Test the system's response when a staff member enters an
+ invalid extension duration.
- **Steps:**
1. Authenticate as a staff member.
2. Access the "Grant Extension" functionality.
3. Select a student.
4. Enter an invalid extension duration.
5. Submit the form.
-- **Expected Outcome:** The system displays an error message, no extension record is created, and
- the form remains accessible for correction.
+- **Expected Outcome:** The system displays an error message, no extension
+ record is created, and the form remains accessible for correction.
### Test Case 3: Unauthorised Access
-- **Description:** Verify that unauthorised users cannot access the "Grant Extension" functionality.
+- **Description:** Verify that unauthorised users cannot access the "Grant
+ Extension" functionality.
- **Steps:**
- 1. Attempt to access the "Grant Extension" functionality without proper authentication.
-- **Expected Outcome:** The system denies access and displays an appropriate error message.
+ 1. Attempt to access the "Grant Extension" functionality without proper
+ authentication.
+- **Expected Outcome:** The system denies access and displays an appropriate
+ error message.
### Test Case 4: Notification Sent to Student
-- **Description:** Check if the student receives a notification when an extension is granted.
+- **Description:** Check if the student receives a notification when an
+ extension is granted.
- **Steps:**
1. Authenticate as a staff member.
2. Grant an extension to a student.
3. Verify the student's notifications.
-- **Expected Outcome:** The student receives a notification indicating the granted extension and its
- duration.
+- **Expected Outcome:** The student receives a notification stating the
+ granted extension and its duration.
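Test Cases 1 and 2 above can be expressed as an automated sketch. The `ExtensionService` below is a hypothetical in-memory stand-in for the real Rails endpoint (the project's suite uses RSpec, but stdlib Minitest is used here so the sketch is self-contained); it models only the behaviour the test cases assert.

```ruby
require "minitest/autorun"

# Hypothetical in-memory stand-in for the backend extension endpoint.
class ExtensionService
  attr_reader :records, :notifications

  def initialize
    @records = []
    @notifications = []
  end

  # Grants an extension when the duration is a positive integer;
  # otherwise returns an error and creates no record.
  def grant(student:, duration:)
    unless duration.is_a?(Integer) && duration.positive?
      return { ok: false, error: "Duration must be a positive number." }
    end
    @records << { student: student, duration: duration }
    @notifications << "#{student}: extension of #{duration} day(s) granted"
    { ok: true }
  end
end

class GrantExtensionTest < Minitest::Test
  def setup
    @service = ExtensionService.new
  end

  # Test Case 1: a valid grant creates a record and notifies the student.
  def test_successful_extension_granting
    assert @service.grant(student: "student-1", duration: 7)[:ok]
    assert_equal 1, @service.records.length
    assert_equal 1, @service.notifications.length
  end

  # Test Case 2: an invalid duration is rejected and no record is created.
  def test_invalid_extension_duration
    refute @service.grant(student: "student-1", duration: -3)[:ok]
    assert_empty @service.records
  end
end
```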
## 8-2 Frontend Testing
Frontend components will undergo testing to ensure a seamless user experience.
-- **Form Validation Testing:** Validate the form's behaviour when inputs are correct and incorrect,
- ensuring error messages display appropriately.
-- **Integration Testing:** Test the integration of the "Grant Extension" form into the staff
- dashboard, ensuring proper rendering and interaction.
-- **Notification Testing:** Verify that notifications are triggered and displayed correctly for
- students.
-- **Responsive Testing:** Test the form's responsiveness across various devices and screen sizes.
-
-By executing comprehensive backend and frontend tests, we ensure the "Staff Grant Extension" feature
-functions accurately, is secure from unauthorised access, and provides a seamless experience to
-staff and students. Successful testing will lead to a reliable and user-friendly addition to the
-OnTrack system.
+- **Form Validation Testing:** Validate the form's behaviour when inputs are
+ correct and incorrect, ensuring error messages display appropriately.
+- **Integration Testing:** Test the integration of the "Grant Extension" form
+ into the staff dashboard, ensuring proper rendering and interaction.
+- **Notification Testing:** Verify that notifications are triggered and
+ displayed correctly for students.
+- **Responsive Testing:** Test the form's responsiveness across various devices
+ and screen sizes.
+
+By executing comprehensive backend and frontend tests, we ensure the "Staff
+Grant Extension" feature functions accurately, is secure from unauthorised
+access, and provides a seamless experience to staff and students. Successful
+testing will lead to a reliable and user-friendly addition to the OnTrack
+system.
## 9-Deployment Plan
-The deployment plan outlines the steps to smoothly introduce the "Staff Grant Extension" feature
-into the OnTrack system, ensuring minimal disruptions and optimal user experience.
+The deployment plan outlines the steps to smoothly introduce the "Staff Grant
+Extension" feature into the OnTrack system, ensuring minimal disruptions and
+optimal user experience.
### 9-1-Pre-Deployment Preparation
-- Conduct thorough testing of both frontend and backend components, addressing any identified
- issues.
-- Review and ensure that the codebase adheres to coding standards and best practices.
-- Create a backup of the existing system and database to mitigate potential risks during deployment.
+- Conduct thorough testing of both frontend and backend components, addressing
+ any identified issues.
+- Review and ensure that the codebase adheres to coding standards and best
+ practices.
+- Create a backup of the existing system and database to mitigate potential
+ risks during deployment.
### 9-2-Deployment Steps
-- **Database Migration:** Apply necessary database migrations to accommodate the new extension
- records.
+- **Database Migration:** Apply necessary database migrations to accommodate the
+ new extension records.
-- **Backend Deployment:** Deploy the backend changes to the production server. Monitor for any
- errors or anomalies during deployment.
+- **Backend Deployment:** Deploy the backend changes to the production server.
+ Monitor for any errors or anomalies during deployment.
-- **Frontend Deployment:** Deploy the updated frontend components to the production server. Ensure
- that the new form and notifications are integrated seamlessly.
+- **Frontend Deployment:** Deploy the updated frontend components to the
+ production server. Ensure that the new form and notifications are integrated
+ seamlessly.
### 9-3-Post-Deployment Tasks
-- **Data Migration:** If needed, migrate existing data to match the new database schema.
-- **Testing:** Conduct thorough testing in the production environment to ensure everything works as
- expected.
+- **Data Migration:** If needed, migrate existing data to match the new database
+ schema.
+- **Testing:** Conduct thorough testing in the production environment to ensure
+ everything works as expected.
### 9-4-Rollback Plan
-- In case of unforeseen issues during deployment, have a rollback plan in place to revert to the
- previous version of the system.
+- In case of unforeseen issues during deployment, have a rollback plan in place
+ to revert to the previous version of the system.
### 9-5-Communication
-- Notify staff members and users about the upcoming feature addition and any potential downtime
- during deployment.
-- Communicate the benefits and functionality of the "Staff Grant Extension" feature to encourage
- user adoption.
+- Notify staff members and users about the upcoming feature addition and any
+ potential downtime during deployment.
+- Communicate the benefits and functionality of the "Staff Grant Extension"
+ feature to encourage user adoption.
### 9-6-Monitoring and Support
-- Monitor the system closely during the initial days after deployment to detect any unexpected
- behaviors.
-- Provide quick response and support to address any user issues or inquiries related to the new
- feature.
+- Monitor the system closely during the initial days after deployment to detect
+ any unexpected behaviours.
+- Provide quick response and support to address any user issues or inquiries
+ related to the new feature.
### 9-7-Documentation Update
-- Update user documentation, guides, and tutorials to reflect the new "Staff Grant Extension"
- feature.
-- Include instructions for staff members on how to use the new functionality effectively.
+- Update user documentation, guides, and tutorials to reflect the new "Staff
+ Grant Extension" feature.
+- Include instructions for staff members on how to use the new functionality
+ effectively.
### 9-8-Continuous Improvement
-- Gather feedback from staff members and users about their experience with the new feature.
-- Use this feedback to make necessary improvements and enhancements to the feature in future
- updates.
+- Gather feedback from staff members and users about their experience with the
+ new feature.
+- Use this feedback to make necessary improvements and enhancements to the
+ feature in future updates.
-By following this deployment plan, the "Staff Grant Extension" feature will be seamlessly integrated
-into the OnTrack system, offering enhanced capabilities to staff members and students while
-maintaining the stability and reliability of the platform.
+By following this deployment plan, the "Staff Grant Extension" feature will be
+seamlessly integrated into the OnTrack system, offering enhanced capabilities to
+staff members and students while maintaining the stability and reliability of
+the platform.
## 10-Conclusion
-The design document for the "Staff Grant Extension" feature presents a comprehensive blueprint for
-integrating this pivotal enhancement into the OnTrack system. By empowering staff members to grant
-extensions to students, the feature addresses the evolving needs of educational environments,
-ensuring a tailored and adaptable approach to supporting students' unique circumstances.
-
-The "Staff Grant Extension" feature enriches the OnTrack system by seamlessly bridging frontend and
-backend components. Through a user-friendly form, staff members can initiate extension requests,
-specifying durations and reasons, which are then securely stored in the system's database
-Notifications are triggered, enhancing communication with students. Robust error handling,
-validation mechanisms, and stringent security measures ensure data integrity and user confidence.
-
-The envisioned architecture fosters collaboration between students and staff members, enabling
-personalized solutions without disrupting established workflows. The design emphasizes usability,
-scalability, and performance, thereby elevating the user experience across the platform.
-
-Incorporating the "Staff Grant Extension" feature extends OnTrack's capability to adapt and respond
-to students' unique circumstances, fostering a more inclusive and flexible educational experience.
-By following the outlined design principles and implementation strategies, the feature promises
-seamless integration, streamlined functionality, and enhanced communication within the OnTrack
-system. This design document serves as a roadmap to achieving these goals and advancing the
-educational support provided by the platform.
+The design document for the "Staff Grant Extension" feature presents a
+comprehensive blueprint for integrating this pivotal enhancement into the
+OnTrack system. By empowering staff members to grant extensions to students, the
+feature addresses the evolving needs of educational environments, ensuring a
+tailored and adaptable approach to supporting students' unique circumstances.
+
+The "Staff Grant Extension" feature enriches the OnTrack system by seamlessly
+bridging frontend and backend components. Through a user-friendly form, staff
+members can initiate extension requests, specifying durations and reasons, which
+are then securely stored in the system's database. Notifications are triggered,
+enhancing communication with students. Robust error handling, validation
+mechanisms, and stringent security measures ensure data integrity and user
+confidence.
+
+The envisioned architecture fosters collaboration between students and staff
+members, enabling personalised solutions without disrupting established
+workflows. The design emphasises usability, scalability, and performance,
+thereby elevating the user experience across the platform.
+
+Incorporating the "Staff Grant Extension" feature extends OnTrack's capability
+to adapt and respond to students' unique circumstances, fostering a more
+inclusive and flexible educational experience. By following the outlined design
+principles and implementation strategies, the feature promises seamless
+integration, streamlined functionality, and enhanced communication within the
+OnTrack system. This design document serves as a roadmap to achieving these
+goals and advancing the educational support provided by the platform.
diff --git a/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/REQUIREMENTS.md b/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/REQUIREMENTS.md
index adc5edadf..7a42815a4 100644
--- a/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/REQUIREMENTS.md
+++ b/src/content/docs/Products/OnTrack/Projects/Staff Grant Extension/REQUIREMENTS.md
@@ -4,24 +4,27 @@ title: Requirements Document:OnTrack - Staff Grant Extension Feature
## 1-Introduction
-The purpose of this document is to outline the requirements for the implementation of the "Staff
-Grant Extension" feature in the OnTrack (also known as Doubtfire). This feature aims to empower
-staff members to grant extensions to students, even in cases where there are no formal extension
-requests within the system.
+The purpose of this document is to outline the requirements for the
+implementation of the "Staff Grant Extension" feature in OnTrack (also known
+as Doubtfire). This feature aims to empower staff members to grant extensions to
+students, even in cases where there are no formal extension requests within the
+system.
## 2-Use Case
### 2-1-User Story
-As a staff member, I want to be able to grant extensions to students, even when no formal extension
-requests are submitted through the system. This allows me to accommodate special circumstances that
-may have been communicated through other means.
+As a staff member, I want to be able to grant extensions to students, even when
+no formal extension requests are submitted through the system. This allows me to
+accommodate special circumstances that may have been communicated through other
+means.
### 2-2-Acceptance Criteria
- Staff members can initiate extension requests for specific students.
- Staff members can specify the duration of the extension.
-- Extension requests initiated by staff members are recorded in the system for future reference.
+- Extension requests initiated by staff members are recorded in the system for
+ future reference.
- Students receive notifications about granted extensions.
## 3-Functional Requirements
@@ -30,58 +33,61 @@ may have been communicated through other means.
#### _3-1-1-Grant Extension Form_
-- A new option should be added to the staff dashboard or relevant pages for granting extensions.
-- The form should include fields for selecting the student, entering the extension duration, and
- adding any relevant notes.
+- A new option should be added to the staff dashboard or relevant pages for
+ granting extensions.
+- The form should include fields for selecting the student, entering the
+ extension duration, and adding any relevant notes.
- The reason for the extension to be granted.
- The medium the extension was requested (if not formal).
- An interface to search for and select students should be provided.
#### _3-1-2-Notifications_
-- Students should receive notifications via email or within the system when a staff member grants an
- extension.
-- Notifications should include details about the granted extension and its duration.
+- Students should receive notifications via email or within the system when a
+ staff member grants an extension.
+- Notifications should include details about the granted extension and its
+ duration.
### 3-2-Backend Functionality
#### _3-2-1-Extension Record_
-- An extension record should be created and associated with the student, the staff member initiating
- the extension, and the specified duration.
+- An extension record should be created and associated with the student, the
+ staff member initiating the extension, and the specified duration.
- Extension records should be viewable by both staff members and students.
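As a sketch of the record's shape (field names are illustrative; the real schema would be defined by a Rails migration and ActiveRecord model):

```ruby
# Illustrative shape of an extension record as described above. Field
# names are hypothetical; the real schema lives in the Rails backend.
ExtensionRecord = Struct.new(
  :student_id,    # the student receiving the extension
  :staff_id,      # the staff member who initiated it
  :duration_days, # the specified duration of the extension
  :reason,        # why the extension was granted
  :granted_at,    # timestamp, useful when viewing past extensions
  keyword_init: true
)

record = ExtensionRecord.new(
  student_id: 42, staff_id: 7, duration_days: 5,
  reason: "Illness communicated by email", granted_at: Time.now
)
```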
## 4-Technical Requirements
### 4-1-Technology Stack
-The "Staff Grant Extension" feature should be implemented using the existing technology stack of the
-Doubtfire system.
+The "Staff Grant Extension" feature should be implemented using the existing
+technology stack of the Doubtfire system.
- Frontend: Angular and Tailwind CSS
- Backend: Ruby on Rails
### 4-2-Data Management
-- Extension records should be stored in the system's database, associated with the relevant student
- and staff member.
+- Extension records should be stored in the system's database, associated with
+ the relevant student and staff member.
### 4-3-User Authentication and Authorisation
-- Only authorised staff members should have access to the "Grant Extension" functionality.
+- Only authorised staff members should have access to the "Grant Extension"
+ functionality.
- Appropriate access controls should be implemented to ensure data security.
## 5-Non-Functional Requirements
### 5-1-Usability
-- The user interface for granting extensions should be intuitive and user-friendly, requiring
- minimal training for staff members.
+- The user interface for granting extensions should be intuitive and
+ user-friendly, requiring minimal training for staff members.
### 5-2-Performance
-- The feature should be responsive and provide a seamless experience for staff members, even during
- periods of high system usage.
+- The feature should be responsive and provide a seamless experience for staff
+ members, even during periods of high system usage.
## 6-Future Considerations
@@ -94,7 +100,8 @@ Doubtfire system.
#### _Test Case 1: Successful Extension Granting_
-Description: Verify that a staff member can successfully grant an extension to a student.
+Description: Verify that a staff member can successfully grant an extension to a
+student.
Steps:
@@ -104,38 +111,44 @@ Steps:
4. Enter a valid extension duration.
5. Submit the form.
-Expected Outcome: The extension is granted, and a new extension record is created in the database.
-The student receives a notification, and the staff member can view the granted extension details.
+Expected Outcome: The extension is granted, and a new extension record is
+created in the database. The student receives a notification, and the staff
+member can view the granted extension details.
#### _Test Case 2: Invalid Extension Duration_
-Description: Test the system's response when a staff member enters an invalid extension duration.
+Description: Test the system's response when a staff member enters an invalid
+extension duration.
Steps:
1. Authenticate as a staff member.
2. Access the "Grant Extension" functionality.
3. Select a student.
-4. Enter an invalid extension duration (e.g., a negative value or non-numeric input).
+4. Enter an invalid extension duration (e.g., a negative value or non-numeric
+ input).
5. Submit the form.
-Expected Outcome: The system displays an error message indicating that the entered duration is
-invalid. No extension record is created.
+Expected Outcome: The system displays an error message indicating that the
+entered duration is invalid. No extension record is created.
#### _Test Case 3: Unauthorised Access_
-Description: Verify that unauthorised users cannot access the "Grant Extension" functionality.
+Description: Verify that unauthorised users cannot access the "Grant Extension"
+functionality.
Steps:
-1. Attempt to access the "Grant Extension" functionality without proper authentication as a staff
- member.
+1. Attempt to access the "Grant Extension" functionality without proper
+ authentication as a staff member.
-Expected Outcome: The system denies access and displays an appropriate error message.
+Expected Outcome: The system denies access and displays an appropriate error
+message.
#### _Test Case 4: Notification Sent to Student_
-Description: Check if the student receives a notification when an extension is granted.
+Description: Check if the student receives a notification when an extension is
+granted.
Steps:
@@ -143,8 +156,8 @@ Steps:
2. Grant an extension to a student.
3. Verify the student's notifications.
-Expected Outcome: The student receives a notification indicating the granted extension and its
-duration.
+Expected Outcome: The student receives a notification indicating the granted
+extension and its duration.
### 7-2-Running Tests and Interpreting Results
@@ -158,22 +171,27 @@ duration.
#### _7.2.2. Interpreting Results_
1. If all tests pass, you will see a success message(s).
-2. If any test fails, you will see a descriptive error message indicating the test that failed and
- the reason for failure.
+2. If any test fails, you will see a descriptive error message indicating the
+ test that failed and the reason for failure.
#### _7.2.3. Troubleshooting_
-1. If tests fail, review the error messages and stack traces to identify the issue.
-2. Check the backend code related to the failing test to diagnose and fix the problem.
-3. Rerun the tests after making changes to verify that the issue has been resolved.
+1. If tests fail, review the error messages and stack traces to identify the
+ issue.
+2. Check the backend code related to the failing test to diagnose and fix the
+ problem.
+3. Rerun the tests after making changes to verify that the issue has been
+ resolved.
## 8-Conclusion
-The "Staff Grant Extension" feature enhances the flexibility and responsiveness of the OnTrack
-learning management system by allowing staff members to grant extensions to students based on
-individual circumstances. This document outlines the functional, technical, and non-functional
-requirements for the successful implementation of this feature. Thorough testing of the backend
-extension granting endpoint ensures that the "Staff Grant Extension" feature functions as expected.
-The test cases cover various scenarios, including successful extension granting, error handling, and
-notifications. Running the tests and interpreting the results helps identify and address issues
-before deploying the feature to production.
+The "Staff Grant Extension" feature enhances the flexibility and responsiveness
+of the OnTrack learning management system by allowing staff members to grant
+extensions to students based on individual circumstances. This document outlines
+the functional, technical, and non-functional requirements for the successful
+implementation of this feature. Thorough testing of the backend extension
+granting endpoint ensures that the "Staff Grant Extension" feature functions as
+expected. The test cases cover various scenarios, including successful extension
+granting, error handling, and notifications. Running the tests and interpreting
+the results helps identify and address issues before deploying the feature to
+production.
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2022-t3-hand-over-document.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2022-t3-hand-over-document.md
index 8aed313df..f6ed9dd05 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2022-t3-hand-over-document.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2022-t3-hand-over-document.md
@@ -6,58 +6,68 @@ title: 2022 T3 Hand-Over Document
## Purpose of Document
-The purpose of this document is to explain to future collaborators of this project what has been
-accomplished. This document is a guide for future collaborators, on what their next course of action
-ought to be.
+The purpose of this document is to explain to future collaborators of this
+project what has been accomplished. This document is a guide for future
+collaborators on what their next course of action ought to be.
## State of Project When Received
- Several key stakeholders had been identified.
- Several features had been derived from the stakeholder personas.
- Two designs had been handed to us.
-- A so-called "back-end emulator" and "front-end simulator" was handed to us, which was supposed to
- be an educationally assistive technology.
+- A so-called "back-end emulator" and "front-end simulator" were handed to us,
+ which were supposed to be educationally assistive technologies.
## State of Project at Hand-Over
-- The so-called "back-end emulator" and "front-end simulator" were redefined as an API, the
- "`chathistorydisplayer-api`" application, and a web interface geared towards testing the API, the
- "`chathistorydisplayer-web`" application.
- - The `chathistorydisplayer-api` application is located in the directory called `emulator` in the
+- The so-called "back-end emulator" and "front-end simulator" were redefined as
+ an API, the "`chathistorydisplayer-api`" application, and a web interface
+ geared towards testing the API, the "`chathistorydisplayer-web`" application.
+ - The `chathistorydisplayer-api` application is located in the directory
+ called `emulator` in the
[thoth-tech/ChatHistoryDisplayer](https://github.com/thoth-tech/ChatHistoryDisplayer)
repository.
- - The `chathistorydisplayer-web` application is located in the directory called
- `frontEndSimulator` in the
+ - The `chathistorydisplayer-web` application is located in the directory
+ called `frontEndSimulator` in the
[thoth-tech/ChatHistoryDisplayer](https://github.com/thoth-tech/ChatHistoryDisplayer)
repository.
-- The `chathistorydisplayer-api` application had its containerisation refactored.
+- The `chathistorydisplayer-api` application had its containerisation
+ refactored.
- The `chathistorydisplayer-web` application was containerised.
- `Docker Compose` was integrated and configured to handle spinning up both the
`chathistorydisplayer-api` and `chathistorydisplayer-web` applications.
-- Quality of life features were integrated into the `chathistorydisplayer-api` application. Namely,
- a static code analyser and linter (`rubocop`) and a testing suite (`RSpec` and `Capybara`).
-- 83 offenses in the `chathistorydisplayer-api`, as detected by the newly integrated static code
- analyser, were fixed manually.
-- The `chathistorydisplayer-api` application was altered to facilitate the creation of user
- directories, project directories, and write files from JSON payloads. In comparison, it formerly
- only created user directories and initialised those are git repositories. The back-end team deemed
- it appropriate to change this, so that each project is handled as a git repository; this will
- allow each project to have its history queries for integration into a chat interface.
-- An API end-point was created in the `chathistorydisplayer-api` application to fetch the most
- recent `git diff` of a file.
-- API end-points were created in the `chathistorydisplayer-api` application to handle the deletion
- of user directories, project directories, and files in project directories.
-- A diagram, which acts as a proposition, was created on how the [thoth-tech/ChatHistoryDisplayer]
+- Quality of life features were integrated into the `chathistorydisplayer-api`
+ application. Namely, a static code analyser and linter (`rubocop`) and a
+ testing suite (`RSpec` and `Capybara`).
+- 83 offenses in the `chathistorydisplayer-api`, as detected by the newly
+ integrated static code analyser, were fixed manually.
+- The `chathistorydisplayer-api` application was altered to facilitate the
+ creation of user directories and project directories, and the writing of
+ files from JSON payloads. In comparison, it formerly only created user
+ directories and initialised those as git repositories. The back-end team
+ deemed it appropriate to change this so that each project is handled as a
+ git repository; this will allow each project to have its history queried
+ for integration into a chat interface.
+- An API end-point was created in the `chathistorydisplayer-api` application to
+ fetch the most recent `git diff` of a file.
+- API end-points were created in the `chathistorydisplayer-api` application to
+ handle the deletion of user directories, project directories, and files in
+ project directories.
+- A diagram, which serves as a proposal, was created showing how the
+ [thoth-tech/ChatHistoryDisplayer](https://github.com/thoth-tech/ChatHistoryDisplayer)
may be integrated into
[thoth-tech/doubtfire-api](https://github.com/thoth-tech/doubtfire-api).
-- The `chathistorydisplayer-web` application had a React component library integrated and it was
- then leveraged. This resulted in a visual overhaul of the web application.
-- The `chathistorydisplayer-web` application had visual buttons created for the deletion of user
- directories, project directories, and files in project directories.
-- The `chathistorydisplayer-web` application had `Javascript` events integrated into the text input
- fields, so that it would be clearer what variables were set to during testing.
-- The `chathistorydisplayer-web` application had `Javascript` events integrated into the buttons, so
- that appropriate API end-points were called.
+- The `chathistorydisplayer-web` application had a React component library
+  integrated and then leveraged, resulting in a visual overhaul of the web
+  application.
+- The `chathistorydisplayer-web` application had visual buttons created for the
+ deletion of user directories, project directories, and files in project
+ directories.
+- The `chathistorydisplayer-web` application had `Javascript` events integrated
+ into the text input fields, so that it would be clearer what variables were
+ set to during testing.
+- The `chathistorydisplayer-web` application had `Javascript` events integrated
+ into the buttons, so that appropriate API end-points were called.
## What Next?
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2023-t1-hand-over-document.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2023-t1-hand-over-document.md
index 9621c8ec7..8db438aed 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2023-t1-hand-over-document.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/2023-t1-hand-over-document.md
@@ -6,36 +6,41 @@ title: 2023 T1 Hand-Over Document
## Purpose of Document
-This document aims to inform potential collaborators about the project progress and accomplishments.
-It guides collaborators by outlining the next steps and actions they should take. The document
-ensures project continuity and coherence by detailing previous work. It helps newcollaborators
-understand the project status and build on it. This document also helps future collaborators make
-strategic and informed decisions. It helps themidentify gaps, challenges, and opportunities and
-suggests next steps. It streamlines projectefforts, and empowers future collaborators to effectively
-contribute to project success byleveraging past successes and experiences.
+This document aims to inform potential collaborators about the project progress
+and accomplishments. It guides collaborators by outlining the next steps and
+actions they should take. The document ensures project continuity and coherence
+by detailing previous work. It helps new collaborators understand the project
+status and build on it. This document also helps future collaborators make
+strategic and informed decisions. It helps them identify gaps, challenges, and
+opportunities and suggests next steps. It streamlines project efforts and
+empowers future collaborators to effectively contribute to project success by
+leveraging past successes and experiences.
## Project Overview, Goals, and Objectives
-The Task Submission and Redesign Project, which is a component of the Ontrack Project, has clear
-goals and objectives aimed at enhancing the functionality and efficiency of the existing product.
-The primary objective of the project is to modify the current system in a way that allows each
-submitted artifact to be easily displayed and interpreted by users.
-
-The project also helps markers inspect these artefacts, who evaluate and provide feedback. The
-project aims to speed up marking by helping markers quickly evaluate artefacts, and hence improve
-evaluation efficiency. The Task Submission and Redesign Project aims to streamline marking and add a
-chatbot. The chatbot will mediate marker-student activities. The chatbot may help, answer questions,
+The Task Submission and Redesign Project, which is a component of the Ontrack
+Project, has clear goals and objectives aimed at enhancing the functionality and
+efficiency of the existing product. The primary objective of the project is to
+modify the current system in a way that allows each submitted artifact to be
+easily displayed and interpreted by users.
+
+The project also helps markers, who evaluate and provide feedback, inspect
+these artefacts. The project aims to speed up marking by helping markers quickly
+evaluate artefacts, and hence improve evaluation efficiency. The Task Submission
+and Redesign Project aims to streamline marking and add a chatbot. The chatbot
+will mediate marker-student activities. The chatbot may help, answer questions,
guide, and facilitate communication.
-Overall, the Task Submission and Redesign Project optimizes artefact submission and evaluation. The
-project aims to improve user experience, efficiency, and collaboration between markers and students
-in the Ontrack Project by introducing different submitted artefact display, efficient marking
-procedures, and a chatbot.
+Overall, the Task Submission and Redesign Project optimizes artefact submission
+and evaluation. The project aims to improve user experience, efficiency, and
+collaboration between markers and students in the Ontrack Project by introducing
+a new submitted-artefact display, efficient marking procedures, and a chatbot.
## Project Deliverables
-This sections outlines project deliverables for 2023 T1. For overall project deliverables, please
-check the markdown docment in
+This section outlines project deliverables for 2023 T1. For overall project
+deliverables, please check the markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/blob/main/docs/OnTrack/Task%20Submission%20%26%20Redesign/Deliverables.md).
### Short-term (2023 T1)
@@ -62,7 +67,8 @@ check the markdown docment in
- A modification to a markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
-- [x] Update the project Scope Sign Off Document (to reflect scope changes relevant to T1/2023).
+- [x] Update the project Scope Sign Off Document (to reflect scope changes
+ relevant to T1/2023).
- A modification to a markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
@@ -71,95 +77,113 @@ check the markdown docment in
#### Design
- [x] Create frame-by-frame flows of tutors using the primary design.
- - Multiple images and a video showcase, as output from [Figma] ,in
+  - Multiple images and a video showcase, as output from [Figma], in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign/design_images),
- - Additional information: These flows should determine whether an alteration to the single,
- primary design is required and what specific alteration is required. This could be broken down
- into tasks regarding specific flows for showing the use of specific features.
+ - Additional information: These flows should determine whether an alteration
+ to the single, primary design is required and what specific alteration is
+ required. This could be broken down into tasks regarding specific flows for
+ showing the use of specific features.
-- [x] Create `TaskSubmissionEnhancement` new Features to the student-view design.
+- [x] Create new `TaskSubmissionEnhancement` features for the student-view
+  design.
- Multiple images and a video showcase in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign/design_images),
as output from [Figma](https://www.figma.com/).
- - A markdown document that explains the functions and implementation of the new features in
+ - A markdown document that explains the functions and implementation of the
+ new features in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
#### Code
-- [x] Create `Submission enhancement test environment` for the new features on Student View.
- - A source code of the test environment, demonstration video to showcase the new features and how
- it could present in Ontrack with additional documentation in
+- [x] Create `Submission enhancement test environment` for the new features on
+ Student View.
+  - The source code of the test environment, a demonstration video showcasing
+    the new features and how they could present in Ontrack, and additional
+    documentation in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
### Cyber-security Oriented
-- [x] Create a document that outlines the cybersecurity concerns of the current changes.
+- [x] Create a document that outlines the cybersecurity concerns of the current
+ changes.
- A markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
-- [x] Create a document that introduce administrators to potential cyber security threats or issues.
+- [x] Create a document that introduces administrators to potential
+  cyber-security threats or issues.
- A markdown document (or multiple) in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
-- [x] Create a code script of malware-detection software to implement for the new feature.
+- [x] Create a code script of malware-detection software to implement for the
+ new feature.
- A markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
## State of Project When Received
-- Several key stakeholders had been identified and several user stories and features had been
- derived from the stakeholder personas.
+- Several key stakeholders had been identified and several user stories and
+ features had been derived from the stakeholder personas.
- Two designs (Student and Tutor) had been found in Figma.
-- The "`chathistorydisplayer-api`" and "`chathistorydisplayer-web`" applications were redefined as
- an API and a web interface geared towards testing the API.
-- The "`chathistorydisplayer-api`" application had its containerisation refactored, the
- "`chathistorydisplayer-web`" application was containerised, Docker Compose was integrated, quality
- of life features were integrated, and 83 offenses were fixed manually.
-- The "`chathistorydisplayer-api`" application was altered to facilitate the creation of user
- directories, project directories, and write files from JSON payloads. An API end-point was created
- to fetch the most recent git diff of a file, and API end-points were created to handle the
- deletion of user directories, project directories, and files in project directories.
-- The `chathistorydisplayer-api` application was located in the directory called `emulator` in the
- [thoth-tech/ChatHistoryDisplayer] repository.
-- The `chathistorydisplayer-web` application is located in the directory called `frontEndSimulator`
- in the [thoth-tech/ChatHistoryDisplayer](https://github.com/thoth-tech/ChatHistoryDisplayer)
+- The "`chathistorydisplayer-api`" and "`chathistorydisplayer-web`" applications
+ were redefined as an API and a web interface geared towards testing the API.
+- The "`chathistorydisplayer-api`" application had its containerisation
+ refactored, the "`chathistorydisplayer-web`" application was containerised,
+ Docker Compose was integrated, quality of life features were integrated, and
+ 83 offenses were fixed manually.
+- The "`chathistorydisplayer-api`" application was altered to facilitate the
+ creation of user directories, project directories, and write files from JSON
+ payloads. An API end-point was created to fetch the most recent git diff of a
+ file, and API end-points were created to handle the deletion of user
+ directories, project directories, and files in project directories.
+- The `chathistorydisplayer-api` application was located in the directory called
+  `emulator` in the [thoth-tech/ChatHistoryDisplayer] repository.
+- The `chathistorydisplayer-web` application is located in the directory called
+ `frontEndSimulator` in the
+ [thoth-tech/ChatHistoryDisplayer](https://github.com/thoth-tech/ChatHistoryDisplayer)
repository.
-- More information can be found in the 2022 T3 Handover Document: A markdown document in
+- More information can be found in the 2022 T3 Handover Document: A markdown
+ document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
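The directory-creation and JSON-payload behaviour described above could be sketched roughly as follows. This is an illustrative sketch only, using standard-library Ruby: the helper names, the payload shape (`{"path": ..., "content": ...}`), and the error handling are assumptions, and the real `chathistorydisplayer-api` routes and parameters may differ.

```ruby
require "fileutils"
require "json"
require "open3"

# Hypothetical helper: create a project directory under a user directory
# and initialise it as its own git repository, so that each project's
# history can later be queried independently.
def create_project(root, user, project)
  dir = File.join(root, user, project)
  FileUtils.mkdir_p(dir)
  _, status = Open3.capture2("git", "-C", dir, "init", "-q")
  raise "git init failed for #{dir}" unless status.success?
  dir
end

# Hypothetical helper: write a text file into a project directory from a
# JSON payload of the form {"path": "notes.txt", "content": "..."}.
# File.basename strips any directory components to avoid path traversal.
def write_file_from_payload(project_dir, json)
  payload = JSON.parse(json)
  path = File.join(project_dir, File.basename(payload.fetch("path")))
  File.write(path, payload.fetch("content"))
  path
end
```

In an actual Rails controller these helpers would sit behind the create/delete end-points mentioned in the deliverables, with the payload arriving as request parameters rather than a raw JSON string.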
## State of Project at Hand-Over
-- Frame-by-frame flows of tutors performing current and new proposed features using the primary
- design have been created.
-- The design enhancements, specifically the `TaskSubmissionEnhancement`, have been incorporated into
- the student-view design.
-- A test environment for `TaskSubmissionEnhancement` has been created for the new features on the
- Student View. The source code, a demonstration video showcasing the new features, and additional
- documentation are provided.
-- A document outlining the cyber-security concerns related to propositional changes has been
- created.
-- Documents introducing potential cyber-security threats or issues to OnTrack administrators have
- been prepared.
-- A code script for malware-detection software implementation for the new feature has been provided.
-- These completed deliverables demonstrate progress in different aspects of the project, including
- documentation updates, front-end design enhancements, code implementation, and consideration of
- cyber-security concerns. The project is now ready for handover, with comprehensive documentation
- and tangible outcomes that can serve as a foundation for future development and collaboration.
+- Frame-by-frame flows of tutors performing current and new proposed features
+ using the primary design have been created.
+- The design enhancements, specifically the `TaskSubmissionEnhancement`, have
+ been incorporated into the student-view design.
+- A test environment for `TaskSubmissionEnhancement` has been created for the
+ new features on the Student View. The source code, a demonstration video
+ showcasing the new features, and additional documentation are provided.
+- A document outlining the cyber-security concerns related to proposed changes
+  has been created.
+- Documents introducing potential cyber-security threats or issues to OnTrack
+ administrators have been prepared.
+- A code script for malware-detection software implementation for the new
+ feature has been provided.
+- These completed deliverables demonstrate progress in different aspects of the
+ project, including documentation updates, front-end design enhancements, code
+ implementation, and consideration of cyber-security concerns. The project is
+ now ready for handover, with comprehensive documentation and tangible outcomes
+ that can serve as a foundation for future development and collaboration.
## What Next?
-- Finalise Figma design for component student-views with client and UI enhancement team.
+- Finalise Figma design for component student-views with client and UI
+ enhancement team.
- Iterate on component source code and add ROR implementation if necessary.
- Implement submission enhancement features for production.
-- Expand scope features such as automated task stage changing; for exxample if task is in Task not
- yet started stage, upload of a file will automatically change it to Working on it.
-- Based on the student-views new components progress, work on integrating the new features into
- tutor-views after finalizing the Figma prototype for tutor-views with client to UI enhancement
- team.
-- Read 2023 T1 Project Weekly updates, Meeting Minutes and other documents in the project TEAMS
- channel.
+- Expand scope features such as automated task stage changing; for example, if
+  a task is in the "Task not yet started" stage, uploading a file will
+  automatically change it to "Working on it".
+- Based on the progress of the new student-view components, work on integrating
+  the new features into tutor-views after finalising the Figma prototype for
+  tutor-views with the client and UI enhancement team.
+- Read 2023 T1 Project Weekly updates, Meeting Minutes and other documents in
+ the project TEAMS channel.
- Read
[Project On-boarding](/products/ontrack/projects/task-submission-and-redesign/project-on-boarding)
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/deliverables.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/deliverables.md
index a3e3546ef..e283ee150 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/deliverables.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/deliverables.md
@@ -4,29 +4,33 @@ title: Deliverable Items
## Purpose of this Document
-This document outlines the deliverable items the Task View and Submission Redesign project intends
-to deliver upon. Each trimester, this document is to be reassessed. All team members are expected to
-express their expertise by breaking down deliverable items into smaller, actionable tasks on a
-collaborative technology such as Trello.
+This document outlines the deliverable items the Task View and Submission
+Redesign project intends to deliver upon. Each trimester, this document is to be
+reassessed. All team members are expected to express their expertise by breaking
+down deliverable items into smaller, actionable tasks on a collaborative
+technology such as Trello.
## Structure of the Deliverable Items Document
-All deliverable items are grouped into roles, but team members are allowed (and encouraged) to
-operate outside of their selected roles.
+All deliverable items are grouped into roles, but team members are allowed (and
+encouraged) to operate outside of their selected roles.
All deliverable items have the common form:
- [ ] What needs to be done.
- What evidence needs to be produced to show this is completed or on-going.
- - (OPTIONAL) Pre-requisites: The pre-requisite deliverable items for this deliverable.
- - (OPTIONAL) Additional information: Any additional information that may inform a reader about the
+ - (OPTIONAL) Pre-requisites: The pre-requisite deliverable items for this
deliverable.
+ - (OPTIONAL) Additional information: Any additional information that may
+ inform a reader about the deliverable.
-These deliverable items should then be decomposed into constituting tasks, mediated by some
-collaborative technology (for example, [Trello](https://trello.com/)).
+These deliverable items should then be decomposed into constituent tasks,
+mediated by some collaborative technology (for example,
+[Trello](https://trello.com/)).
-All team members should participate in the decomposition of deliverable items. Team members are also
-encouraged to contribute ideas for deliverable items, as informed by their CLOs.
+All team members should participate in the decomposition of deliverable items.
+Team members are also encouraged to contribute ideas for deliverable items, as
+informed by their CLOs.
## Deliverable Items
@@ -36,55 +40,67 @@ encouraged to contribute ideas for deliverable items, as informed by their CLOs.
- A modification to a markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
-- [x] Modify/Update a document that outlines the deliverable items of the project.
- - A markdown document in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+- [x] Modify/Update a document that outlines the deliverable items of the
+ project.
+ - A markdown document in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- [x] Create a T1/2023 hand-over document.
- - A markdown document in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+ - A markdown document in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
### Front-end Oriented
#### Design
- [ ] Create frame-by-frame flows of tutors performing current and new features.
- - Multiple images and a video showcase, as output from [Figma](https://www.figma.com/), in
- [thoth-tech/documentation](https://github.com/thoth-tech/documentation/), as output from
- [Figma](https://www.figma.com/).
+  - Multiple images and a video showcase, as output from
+    [Figma](https://www.figma.com/), in
+    [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- - Pre-requisite: A single, primary design must be selected for this to be followed through with.
+ - Pre-requisite: A single, primary design must be selected for this to be
+ followed through with.
-- [ ] Create frame-by-frame flows of students performing current and new features.
+- [ ] Create frame-by-frame flows of students performing current and new
+ features.
- Multiple images, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- - Pre-requisite: A single, primary design must be selected for this to be followed through with.
+ - Pre-requisite: A single, primary design must be selected for this to be
+ followed through with.
- - Additional information: These flows should determine whether an alteration to the single,
- primary design is required and what specific alteration is required. This could be broken down
- into tasks regarding specific flows for showing the use of specific features.
+ - Additional information: These flows should determine whether an alteration
+ to the single, primary design is required and what specific alteration is
+ required. This could be broken down into tasks regarding specific flows for
+ showing the use of specific features.
- [ ] Complete the tutor-view design on [Figma](https://www.figma.com/).
- An image, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- - Pre-requisite: The creation of all the flows of the single, primary tutor-view design.
+ - Pre-requisite: The creation of all the flows of the single, primary
+ tutor-view design.
- - Additional information: This deliverable item is completed once all changes, as informed by
- usability and smart default problems obtained from the construction of the flows, are fixed.
+ - Additional information: This deliverable item is completed once all changes,
+ as informed by usability and smart default problems obtained from the
+ construction of the flows, are fixed.
- [x] Complete the student-view design on [Figma](https://www.figma.com/).
- An image, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/)
- - Pre-requisite: The creation of all the flows of the single, primary student-view design.
+ - Pre-requisite: The creation of all the flows of the single, primary
+ student-view design.
- - Additional information: This deliverable item is completed once all changes, as informed by
- usability and smart default problems obtained from the construction of the flows, are fixed.
+ - Additional information: This deliverable item is completed once all changes,
+ as informed by usability and smart default problems obtained from the
+ construction of the flows, are fixed.
- [ ] Create new `TaskSubmissionEnhancement` Features to the student-view
[Figma](https://www.figma.com/) design.
- - An image and video showcase, as output from [Figma](https://www.figma.com/), in
- [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+ - An image and video showcase, as output from [Figma](https://www.figma.com/),
+ in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- [ ] Create an administrator-view [Figma](https://www.figma.com/) design.
- An image, as output from [Figma](https://www.figma.com/), in
@@ -94,29 +110,31 @@ encouraged to contribute ideas for deliverable items, as informed by their CLOs.
- An image, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
-- [ ] Create frame-by-frame flows of administrators performing current and new features.
+- [ ] Create frame-by-frame flows of administrators performing current and new
+ features.
- Multiple images, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- - Pre-requisite: The administrator-view deliverable item from this deliverable document must be
- completed.
+ - Pre-requisite: The administrator-view deliverable item from this deliverable
+ document must be completed.
- - Additional information: This could be broken down into tasks regarding specific flows for
- showing the use of specific features.
+ - Additional information: This could be broken down into tasks regarding
+ specific flows for showing the use of specific features.
-- [ ] Create frame-by-frame flows of convenors performing current and new features.
+- [ ] Create frame-by-frame flows of convenors performing current and new
+ features.
- Multiple images, as output from [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
- - Pre-requisite: The conventor-view deliverable item from this deliverable document must be
- completed.
+  - Pre-requisite: The convenor-view deliverable item from this deliverable
+ document must be completed.
- - Additional information: This could be broken down into tasks regarding specific flows for
- showing the use of specific features.
+ - Additional information: This could be broken down into tasks regarding
+ specific flows for showing the use of specific features.
#### Code
-- [ ] Modify the existing front-end implementation of OnTrack to conform with any of the completed
- designs.
+- [ ] Modify the existing front-end implementation of OnTrack to conform with
+ any of the completed designs.
- [x] Expand the `chathistorydisplayer-web` web application.
@@ -124,9 +142,10 @@ encouraged to contribute ideas for deliverable items, as informed by their CLOs.
#### Documentation
-- [x] Create design propositions on how the `ChatHistoryDisplayer` integrates with the OnTrack
- platform.
- - An image file in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+- [x] Create design propositions on how the `ChatHistoryDisplayer` integrates
+ with the OnTrack platform.
+ - An image file in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
#### `ChatHistoryDisplayer`
@@ -134,60 +153,71 @@ encouraged to contribute ideas for deliverable items, as informed by their CLOs.
- [x] Implement Docker at `chathistorydisplayer-web`.
-- [x] Integrate Docker Compose at the root of the [thoth-tech/ChatHistoryDisplayer] repository.
+- [x] Integrate Docker Compose at the root of the
+ [thoth-tech/ChatHistoryDisplayer] repository.
- [x] Add functionality to `chathistorydisplayer-api`: create user directories.
-- [x] Add functionality to `chathistorydisplayer-api`: create project directories in user
- directories.
- - Additional information: Project directories must be initialised as git repositories.
+- [x] Add functionality to `chathistorydisplayer-api`: create project
+ directories in user directories.
+ - Additional information: Project directories must be initialised as git
+ repositories.
-- [x] Add functionality to `chathistorydisplayer-api`: write file from JSON payload.
+- [x] Add functionality to `chathistorydisplayer-api`: write file from JSON
+ payload.
- Additional information: Pertains to text files.
-- [x] Add functionality to `chathistorydisplayer-api`: API end-point that retrieves the last
- `git diff` of a text file.
+- [x] Add functionality to `chathistorydisplayer-api`: API end-point that
+ retrieves the last `git diff` of a text file.
-- [ ] Add functionality to `chathistorydisplayer-api`: authorisation at API end-points.
+- [ ] Add functionality to `chathistorydisplayer-api`: authorisation at API
+ end-points.
-- [ ] Add functionality to `chathistorydisplayer-api`: version control of PDF documents using the
- `git gem`.
+- [ ] Add functionality to `chathistorydisplayer-api`: version control of PDF
+ documents using the `git gem`.
#### `TaskSubmissionEnhancement`
-- [ ] Create a prototype of `TaskSubmissionEnhancement` Component of the Ontrack platform that adds
- the following four features that would benefit both students and teaching staff:
+- [ ] Create a prototype of the `TaskSubmissionEnhancement` component of the
+  Ontrack platform that adds the following four features that would benefit
+  both students and teaching staff:
- The ability to submit files regardless of the task state.
- The ability to submit individual task files.
- - The ability to submit optional additional files outside of the task requirements.
+ - The ability to submit optional additional files outside of the task
+ requirements.
- The ability to observe task file upload differences.
#### `Doubtfire`
- [ ] Modify OnTrack to serve raw files, where appropriate.
- Additional information: This contributes towards the integration of the
- `chathistorydisplayer-api` into the OnTrack platform, as the OnTrack platform needs PDF
- processing removed and separate handling for different classes of files (text files and PDFs
- come to mind).
+ `chathistorydisplayer-api` into the OnTrack platform, as the OnTrack
+ platform needs PDF processing removed and separate handling for different
+ classes of files (text files and PDFs come to mind).
-- [ ] The integration of `chathistorydisplayer-api` into the Docker environment of the OnTrack
- platform.
+- [ ] The integration of `chathistorydisplayer-api` into the Docker environment
+ of the OnTrack platform.
- [ ] Integrate `TaskSubmissionEnhancement` into the OnTrack platform.
### Cyber-security Oriented
-- [ ] Create a document that outlines the cyber-security protocols for project group members.
- - A markdown document in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+- [ ] Create a document that outlines the cyber-security protocols for project
+ group members.
+ - A markdown document in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
-- [ ] Create a document, or documents, that introduce OnTrack administrators to potential
- cyber-security threats or issues.
+- [ ] Create a document, or documents, that introduce OnTrack administrators to
+ potential cyber-security threats or issues.
- A markdown document (or multiple) in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
-- [ ] Create a document outlining the security concerns of propositional changes (or current enacted
- changes).
- - A markdown document in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+- [ ] Create a document outlining the security concerns of proposed changes (or
+  currently enacted changes).
+ - A markdown document in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
-- [ ] Create a document that surveys group member compliance with security protocols.
- - A markdown document in [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
+- [ ] Create a document that surveys group member compliance with security
+ protocols.
+ - A markdown document in
+ [thoth-tech/documentation](https://github.com/thoth-tech/documentation/).
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/epic.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/epic.md
index b83244ed1..2dbf891f0 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/epic.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/epic.md
@@ -14,8 +14,9 @@ title: Author Information
- Documentation Title: Epic Document
- Documentation Type: Technical
-- Documentation Information Summary: Critical links and resources; the background, context, and
- business value of the project; and the acceptance criteria.
+- Documentation Information Summary: Critical links and resources; the
+ background, context, and business value of the project; and the acceptance
+ criteria.
## Document Review Information
@@ -30,20 +31,21 @@ title: Author Information
---
-Trello: A web-based list-making application designed with a focus on teams that implement a scrum
-style of organisation.
+Trello: A web-based list-making application designed with a focus on teams that
+implement a scrum style of organisation.
Figma: A web-based application for user interface and user experience design.
-UI: User Interface; the means by which a human interacts with a machine, these are typically
-graphical interfaces that accept input from an end-user.
+UI: User Interface; the means by which a human interacts with a machine. These
+are typically graphical interfaces that accept input from an end-user.
-UX: User Experience; all aspects of the end-user's interactions with an application or device.
+UX: User Experience; all aspects of the end-user's interactions with an
+application or device.
-Flow: A frame-by-frame image of a user (a student, tutor, convenor, or administrator) performing a
-necessary function from beginning to end. Flows allow designers to think critically about how the
-usability of a design. It may save the project from investing time and resources into unusable
-dead-ends.
+Flow: A frame-by-frame image of a user (a student, tutor, convenor, or
+administrator) performing a necessary function from beginning to end. Flows
+allow designers to think critically about the usability of a design and may
+save the project from investing time and resources into unusable dead-ends.
## Key Links/Resources
@@ -61,7 +63,8 @@ dead-ends.
---
-see [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md)
+see
+[Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/README.md)
## Related Documents
@@ -73,17 +76,19 @@ see [Thoth Tech Handbook](https://github.com/thoth-tech/handbook/blob/main/READM
## Background / Context
-OnTrack is employed by multiple institutions as a learning management system. The View Task and
-Submission project intends to create user-centric modifications to existing features, addition of
-new features to the task submission and view in the OnTrack platform.
+OnTrack is employed by multiple institutions as a learning management system.
+The View Task and Submission project intends to make user-centric
+modifications to existing features and to add new features to task submission
+and viewing in the OnTrack platform.
## Business Value
-By further modernising OnTrack, institutions can deploy the OnTrack platform to satisfy the needs of
-their students, markers, assessors, and auditors. The platform can support all stakeholders to
-fulfil their obligations and, in the case of the student, support the learning of essential
-concepts. By streamlining the experience of markers, then associated costs may decrease.
-Additionally, the feedback loop for students (the learning feedback loop) may shorten.
+By further modernising OnTrack, institutions can deploy the OnTrack platform to
+satisfy the needs of their students, markers, assessors, and auditors. The
+platform can support all stakeholders to fulfil their obligations and, in the
+case of the student, support the learning of essential concepts. By streamlining
+the experience of markers, associated costs may decrease. Additionally, the
+feedback loop for students (the learning feedback loop) may shorten.
## In Scope
@@ -121,7 +126,8 @@ Additionally, the feedback loop for students (the learning feedback loop) may sh
## Operations / Support / Training Considerations
-Team members may require training/up-skilling in applications, technologies, and languages, such as:
+Team members may require training/up-skilling in applications, technologies, and
+languages, such as:
- [git](https://git-scm.com/)
- [GitHub](https://github.com/),
@@ -137,13 +143,14 @@ Team members may require training/up-skilling in applications, technologies, and
- [React](https://reactjs.org/),
- [MUI](https://mui.com/).
-Team members must express testing skills by use of various testing tools to ensure functionalitie
-work as intended. They also must be able to fix and/or document and report on issues or bugs as they
-arise.
+Team members must demonstrate testing skills by using various testing tools to
+ensure functionalities work as intended. They must also be able to fix and/or
+document and report on issues or bugs as they arise.
## Acceptance Criteria
- Managing director must approve of the design before implementation.
-- If a change is required, then an alteration to the design (with approval from the managine
- director) must be completed first.
-- All code must be tested before an attempt to pull into the upstream repositories.
+- If a change is required, then an alteration to the design (with approval from
+ the managing director) must be completed first.
+- All code must be tested before an attempt to pull into the upstream
+ repositories.
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/index.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/index.md
index 692b26dfc..0efedc311 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/index.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/index.md
@@ -8,7 +8,8 @@ A document created to reflect what has been so far accomplished in the project.
## [T1, 2023 Project Scope Sign Off Document](/products/ontrack/projects/task-submission-and-redesign/project-scope-signoff-document)
-A document created to reflect the scope of the project and its deliverables for T1 / 2023.
+A document created to reflect the scope of the project and its deliverables for
+T1 / 2023.
## [T2, 2022 Hand-over Document](/products/ontrack/projects/task-submission-and-redesign/2022-t3-hand-over-document)
@@ -16,8 +17,8 @@ A document created to be the first document read on the project.
## [Project On-boarding](/products/ontrack/projects/task-submission-and-redesign/project-on-boarding)
-A document created to assist with the on-boarding process of new contributors to the Task View and
-Submission Redesign project.
+A document created to assist with the on-boarding process of new contributors to
+the Task View and Submission Redesign project.
## [View Task and Submission Redesign Epic](/products/ontrack/projects/task-submission-and-redesign/epic)
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-on-boarding.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-on-boarding.md
index 8639a6080..852c3b407 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-on-boarding.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-on-boarding.md
@@ -6,26 +6,31 @@ title: Project On-boarding
## Purpose of Document
-The purpose of this document is to position you, the potential contributor, such that you can
-contribute to the project. Regardless of your selected role, it is paramount that you join and
-configure all facets of the project. You are encouraged to work outside of your role.
+The purpose of this document is to position you, the potential contributor, such
+that you can contribute to the project. Regardless of your selected role, it is
+paramount that you join and configure all facets of the project. You are
+encouraged to work outside of your role.
## Notice
-All team members are expected to have all facets of the project set up, irrespective of your
-selected or designated roles. This increases the team agility.
+All team members are expected to have all facets of the project set up,
+irrespective of their selected or designated roles. This increases the team's
+agility.
## Trello
- Register a [Trello](https://trello.com/signup) account.
-- Confirm your Trello account by email (may go to Trash, so be sure to check there).
-- Join the Trello board that is assigned by your delivery Lead. The previous team's
+- Confirm your Trello account by email (may go to Trash, so be sure to check
+ there).
+- Join the Trello board assigned by your delivery lead. See the previous team's
[Trello board](https://trello.com/b/FWyBUYG8/task-view-re-design-team-ontrack-project)
## Figma
- Register a [Figma](https://www.figma.com/) account.
-- Confirm your Figma account by email (may go to Trash, so be sure to check there).
+- Confirm your Figma account by email (may go to Trash, so be sure to check
+ there).
- Join the
[Figma project](https://www.figma.com/files/project/61538483/Team-project?fuid=1226098815565608315).
@@ -33,15 +38,19 @@ selected or designated roles. This increases the team agility.
If you haven't already, you must configure git.
-1. Set your git username by `git config --global user.name "FIRST_NAME LAST_NAME"`,
- where`FIRST_NAME` is your first name and `LAST_NAME` is your last name.
-1. Set your git email by `git config --global user.email "YOUR_EMAIL"`, where `YOUR_EMAIL` is your
- email. It is advised that you use your `@users.noreply.github.com` email address, which is, by
- default, `@users.noreply.github.com`, where `` is your GitHub username.
+1. Set your git username by
+ `git config --global user.name "FIRST_NAME LAST_NAME"`, where `FIRST_NAME` is
+ your first name and `LAST_NAME` is your last name.
+1. Set your git email by `git config --global user.email "YOUR_EMAIL"`, where
+ `YOUR_EMAIL` is your email. It is advised that you use your
+ `@users.noreply.github.com` email address, which is, by default,
+ `USERNAME@users.noreply.github.com`, where `USERNAME` is your GitHub
+ username.
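For quick reference, the two configuration steps above can be run together as
below; `FIRST_NAME`, `LAST_NAME`, and `USERNAME` are placeholders for your own
details:

```shell
# Set your commit identity globally. FIRST_NAME, LAST_NAME, and USERNAME are
# placeholders; substitute your own details.
git config --global user.name "FIRST_NAME LAST_NAME"
git config --global user.email "USERNAME@users.noreply.github.com"

# Confirm the values were recorded.
git config --global --get user.name
git config --global --get user.email
```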
## Cloning the Documentation
-This enables you to contribute to the project documentation. You should also read the
+This enables you to contribute to the project documentation. You should also
+read the
[documentation contribution guidelines](https://github.com/thoth-tech/documentation/blob/main/CONTRIBUTING.md).
```shell
@@ -54,14 +63,16 @@ If you are on a Windows machine, then we recommend that you install WSL2.
## Get OnTrack Running on Local Machine
-You need a terminal that supports shell scripts (on Windows, you need WSL2, Msys2, or Cygwin).
+You need a terminal that supports shell scripts (on Windows, you need WSL2,
+Msys2, or Cygwin).
1. Fork [doubtfire-deploy](https://github.com/doubtfire-lms/doubtfire-deploy),
[doubtfire-api](https://github.com/doubtfire-lms/doubtfire-api), and
[doubtfire-web](https://github.com/doubtfire-lms/doubtfire-web)
-2. Clone your [doubtfire-deploy](https://github.com/doubtfire-lms/doubtfire-deploy). Make sure to
- fetch submodules to get the sub-projects.
+2. Clone your
+ [doubtfire-deploy](https://github.com/doubtfire-lms/doubtfire-deploy). Make
+ sure to fetch submodules to get the sub-projects.
```shell
git clone --recurse-submodules https://github.com/YOUR_USERNAME/doubtfire-deploy
@@ -73,22 +84,23 @@ You need a terminal that supports shell scripts (on Windows, you need WSL2, Msys
cd doubtfire-deploy
```
-4. Open a terminal that supports `sh` scripts (on Windows, you require WSL2, Msys2, or Cygwin). Run
- the following command to set your fork as the remote.
+4. Open a terminal that supports `sh` scripts (on Windows, you require WSL2,
+ Msys2, or Cygwin). Run the following command to set your fork as the remote.
```shell
./change_remotes.sh
```
-5. Your delivery lead provides you with the GitHub username to use in this command. This allows you
- to use `git fetch task-view-submission`, `git pull task-view-submission`, and
- `git push task-view-submission`.
+5. Your delivery lead provides you with the GitHub username to use in this
+ command. This allows you to use `git fetch task-view-submission`,
+ `git pull task-view-submission`, and `git push task-view-submission`.
```shell
git remote add task-view-submission https://github.com/PROVIDED_USERNAME/doubtfire-deploy
```
-6. You can now follow the remaining instructions, from instruction four, in the `doubtfire-deploy`
+6. You can now follow the remaining instructions, from instruction four, in the
+ `doubtfire-deploy`
[contributing file](https://github.com/doubtfire-lms/doubtfire-deploy/blob/development/CONTRIBUTING.md#working-with-docker-compose).
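Before continuing, it can help to sanity-check the remote wiring. The sketch
below is illustrative only: it uses a throwaway repository, and
`PROVIDED_USERNAME` is the placeholder from step 5, not a real account:

```shell
# Sketch only: demonstrate the expected remote layout in a throwaway
# repository. PROVIDED_USERNAME is a placeholder, not a real account.
git init --quiet remote-check
cd remote-check
git remote add task-view-submission https://github.com/PROVIDED_USERNAME/doubtfire-deploy
# List the configured remotes; expect fetch and push entries for
# task-view-submission pointing at the provided fork.
git remote -v
```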
## What Next?
@@ -97,22 +109,25 @@ You need a terminal that supports shell scripts (on Windows, you need WSL2, Msys
[the project epic](/products/ontrack/projects/task-submission-and-redesign/epic)
- Become familiar with
[the user stories and features](/products/ontrack/projects/task-submission-and-redesign/user-stories-and-features)
- - Are there any users that are not served in the user stories or by the features?
+ - Are there any users that are not served in the user stories or by the
+ features?
- Become familiar with
[the requirements](/products/ontrack/projects/task-submission-and-redesign/requirements)
- - Are all stakeholders sufficiently provided for, with this set of requirements?
+ - Are all stakeholders sufficiently provided for, with this set of
+ requirements?
- Become familiar with
[the deliverables](/products/ontrack/projects/task-submission-and-redesign/deliverables)
- Are there deliverable items that should be added?
- Are there deliverable items that should be removed?
- - Are there deliverable items that can be decomposed into smaller deliverable items?
-- Examine the first proposed design below and ensure all requirements are met, and that the
- inclusion of the features are user-friendly.
+ - Are there deliverable items that can be decomposed into smaller deliverable
+ items?
+- Examine the first proposed design below and ensure all requirements are met,
+ and that the inclusion of the features is user-friendly.

-- Examine the second proposed design and ensure all requirements are met, and that the inclusion of
- the features are user-friendly.
+- Examine the second proposed design and ensure all requirements are met, and
+ that the inclusion of the features is user-friendly.

- Examine
@@ -123,15 +138,15 @@ You need a terminal that supports shell scripts (on Windows, you need WSL2, Msys
- Has it successfully passed proof-of-concept?
- How can git be implemented on the back-end of the OnTrack product?
- Work on implementing the front-end and back-end.
-- If somebody on your team is well-versed in cyber-security, then an examination of the security of
- the implementation is required.
-- If somebody on the team is well-versed in databases and database administration, then a model of
- the database is required.
+- If somebody on your team is well-versed in cyber-security, then an examination
+ of the security of the implementation is required.
+- If somebody on the team is well-versed in databases and database
+ administration, then a model of the database is required.
## Helpful Points
-- If you are using Windows as your primary operating system and you have not downloaded, installed,
- and/or set-up MinGW, then a former team found the Linux subsystem
- [WSL 2](https://docs.microsoft.com/en-us/windows/wsl/install) and
- [Docker Desktop WSL 2 backend](https://docs.docker.com/desktop/windows/wsl/) as a helpful
- development environment.
+- If you are using Windows as your primary operating system and you have not
+ downloaded, installed, and/or set up MinGW, then a former team found the Linux
+ subsystem [WSL 2](https://docs.microsoft.com/en-us/windows/wsl/install) and
+ [Docker Desktop WSL 2 backend](https://docs.docker.com/desktop/windows/wsl/)
+ to be a helpful development environment.
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-scope-signoff-document.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-scope-signoff-document.md
index b01ba21d6..51e8eb784 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-scope-signoff-document.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/project-scope-signoff-document.md
@@ -11,28 +11,31 @@ title: Project Scope SignOff Document
## Purpose of Document
-The purpose of this document is to give an overiew of the scope of the project at the kick-off and
-verify with the client the project scope for T1 2023. It includes client-approved deliverables, and
-acceptance criteria.
+The purpose of this document is to give an overview of the scope of the project
+at the kick-off and verify with the client the project scope for T1 2023. It
+includes client-approved deliverables and acceptance criteria.
## State of Project When Received
- Several key stakeholders had been identified.
- Several features had been derived from the stakeholder personas.
- Two designs and one design prototype had been handed over.
-- Enhanced ChatHistoryDisplayer: Implemented MUI, Docker Integration. The ChatHistoryDisplayer
- consists of two parts: the server, which is being developed as an API, and the front-end, which
- tests the API. The server is situated in the emulator directory, while the front-end is situated
- in the frontEndSimulator directory. The mission of the API is to be integrated into the OnTrack
- platform.
+- Enhanced ChatHistoryDisplayer: Implemented MUI, Docker Integration. The
+ ChatHistoryDisplayer consists of two parts: the server, which is being
+ developed as an API, and the front-end, which tests the API. The server is
+ situated in the emulator directory, while the front-end is situated in the
+ frontEndSimulator directory. The API is intended to be integrated into the
+ OnTrack platform.
## Deliverables Verification for T1 2023
-The following items will be completed to verify that the project scope has been met:
+The following items will be completed to verify that the project scope has been
+met:
### Purely Documentation Oriented
-- [x] Modify the project epic and other related documents (make it relevant to T1/2023).
+- [x] Modify the project epic and other related documents (make it relevant to
+ T1/2023).
- A modification to a markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
@@ -49,44 +52,54 @@ The following items will be completed to verify that the project scope has been
#### Design
- [x] Create frame-by-frame flows of tutors using the primary design.
- - Multiple images and a video showcase, as output from [Figma](https://www.figma.com/), in
+ - Multiple images and a video showcase, as output from
+ [Figma](https://www.figma.com/), in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign/design_images),
- - Additional information: These flows should determine whether an alteration to the single,
- primary design is required and what specific alteration is required. This could be broken down
- into tasks regarding specific flows for showing the use of specific features.
+ - Additional information: These flows should determine whether an alteration
+ to the single, primary design is required and, if so, what specific
+ alteration is needed. This could be broken down into tasks covering
+ specific flows that show the use of specific features.
-- [x] Create `TaskSubmissionEnhancement` new Features to the student-view design.
+- [x] Create new `TaskSubmissionEnhancement` features for the student-view
+ design.
- Multiple images and a video showcase in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign/design_images),
as output from [Figma](https://www.figma.com/).
- - A markdown document that explains the functions and implementation of the new features in
+ - A markdown document that explains the functions and implementation of the
+ new features in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
### Back-end Oriented
#### `TaskSubmissionEnhancement`
-- [x] Create a prototype of `TaskSubmissionEnhancement` Component of the Ontrack platform that adds:
+- [x] Create a prototype of the `TaskSubmissionEnhancement` component of the
+ OnTrack platform that adds:
- The ability to submit files regardless of the task state.
- The ability to submit individual task files.
- - The ability to submit optional additional files outside of the task requirements.
+ - The ability to submit optional additional files outside of the task
+ requirements.
- The ability to observe task file upload differences.
-- [x] Create `Submission enhancement test environment` for the new features on Student View.
+- [x] Create a `Submission enhancement test environment` for the new features
+ on Student View.
### Cyber-security Oriented
-- [x] Create a document that outlines the cybersecurity concerns of the current changes.
+- [x] Create a document that outlines the cybersecurity concerns of the current
+ changes.
- A markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
-- [x] Create a document that introduce administrators to potential cyber security threats or issues.
+- [x] Create a document that introduces administrators to potential
+ cyber-security threats or issues.
- A markdown document (or multiple) in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
-- [x] Create a code script of malware-detection software to implement for the new feature.
+- [x] Create a malware-detection script to implement for the new feature.
- A markdown document in
[thoth-tech/documentation](https://github.com/thoth-tech/documentation/tree/main/docs/OnTrack/Task%20Submission%20%26%20Redesign).
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/requirements.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/requirements.md
index 066735461..533a000d9 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/requirements.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/requirements.md
@@ -26,41 +26,43 @@ A chat bot that takes on existing features and mediates new features:
### Require tutor interaction
-When a new upload event occurs, the tutor is required to send a substantive message to their
-student. This can be communicated to the tutor via the [chat bot](#chat-bot).
+When a new upload event occurs, the tutor is required to send a substantive
+message to their student. This can be communicated to the tutor via the
+[chat bot](#chat-bot).
### Time-based log
-Displaying student/tutor and teacher events in a time-based log with the ability to scroll back and
-view previous events. This uses a git-based ruby backend to store submission files. There is a
-repository created for each individual submission task.
+Displaying student/tutor and teacher events in a time-based log with the ability
+to scroll back and view previous events. This uses a git-based Ruby back-end to
+store submission files. There is a repository created for each individual
+submission task.
### Stages for tasks
-The implementation of stages as extra resources for students who require more resources and
-confidence. This enables students to tackle the task in different ways to help support their
-learning.
+The implementation of stages as extra resources for students who require more
+guidance and confidence. This enables students to tackle the task in different
+ways to help support their learning.
### Commit system
-The implementation of a commit system for tasks, enabling a set of mandatory core files to be
-uploaded in the form of a commit.
+The implementation of a commit system for tasks, enabling a set of mandatory
+core files to be uploaded in the form of a commit.
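The commit-per-upload idea can be pictured with plain git commands. This is a
minimal sketch of the proposed model only; the file names and per-task
repository layout are invented for illustration, not the actual OnTrack
back-end:

```shell
# Illustrative sketch only (not the actual OnTrack back-end): one repository
# per submission task, with each upload of the mandatory core files recorded
# as a single commit.
git init --quiet task-submission-demo
cd task-submission-demo
git config user.name "Example Student"
git config user.email "student@example.com"

printf 'core file one\n' > core-1.md
printf 'core file two\n' > core-2.md
git add core-1.md core-2.md
git commit --quiet -m "Upload mandatory core files"

# The time-based log falls out of the commit history.
git log --oneline
```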
### File replacer
-The ability for students, tutors/teachers to view submitted files in their browser as well as write
-and view submitted comments regarding those files. The file replacer is supported by the uplifted
-file management system.
+The ability for students and tutors/teachers to view submitted files in their
+browser as well as write and view submitted comments regarding those files. The
+file replacer is supported by the uplifted file management system.
### Diff viewer
-The ability for tutors to be able to compare code files submitted by students via a difference
-viewer.
+The ability for tutors to compare code files submitted by students via a
+difference viewer.
### Testing environment
-An emulation of the back-end is required as a proof-of-concept. The emulation also serves as an
-education piece for future collaborators.
+An emulation of the back-end is required as a proof-of-concept. The emulation
+also serves as an education piece for future collaborators.
## Resources
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/submission-enhancement-overview-doc.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/submission-enhancement-overview-doc.md
index 50477deee..3c742583b 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/submission-enhancement-overview-doc.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/submission-enhancement-overview-doc.md
@@ -4,80 +4,89 @@ title: Submission Enhancement Overview Document
## Component Overview
-During Week 5 of the trimester, the team was able to secure a meeting with the client Andrew Cain
-who suggested a pivoted focus towards enhancement for the Task Submission Component of the Ontrack
-platform. The client outlined four features that he believes would benefit both students and
-teaching staff. The features include:
+During Week 5 of the trimester, the team secured a meeting with the client,
+Andrew Cain, who suggested pivoting focus towards enhancing the Task Submission
+component of the OnTrack platform. The client outlined four features that he
+believes would benefit both students and teaching staff. The features include:
- The ability to submit files regardless of the task state.
- The ability to submit individual task files.
-- The ability to submit optional additional files outside of the task requirements.
+- The ability to submit optional additional files outside of the task
+ requirements.
- The ability to observe task file upload differences.
## Feature: The ability to submit files, regardless of the task state
### Feature 1 Current Implementation
-Currently, the Ontrack platform only allows students to submit their files when the task state is
-changed to ‘Ready for Feedback’.
+Currently, the OnTrack platform only allows students to submit their files when
+the task state is changed to ‘Ready for Feedback’.
### Feature 1 Proposal
-Presented as a new button within the task card, the enhancement would allow for students to submit
-their task files regardless of the task state (‘Not Started’, ‘Working On It’, ‘Need Help’ and
-‘Ready for Feedback’). Future iterations of this feature could include automated task state changing
-depending on conditional statements. E.g. Task remain as ‘Not Started’ until a file is uploaded
-where it is then changed to ‘Working on It’ and then automatically changed to ’Ready for Feedback’
-when all files are uploaded.
+Presented as a new button within the task card, the enhancement would allow
+students to submit their task files regardless of the task state (‘Not Started’,
+‘Working On It’, ‘Need Help’ and ‘Ready for Feedback’). Future iterations of
+this feature could include automated task state changes driven by conditional
+logic: for example, a task remains ‘Not Started’ until a file is uploaded, at
+which point it changes to ‘Working On It’, and then changes automatically to
+‘Ready for Feedback’ when all files are uploaded.
### Feature 1 Value
-In conjunction with the ability to submit individual task files, teaching staff will be able to
-observe the progression of a student through the task. The ability to submit files during any stage
-would allow for students to request help from the teaching staff for already submitted files so
-discussion can be more targeted to the submissions in question.
+In conjunction with the ability to submit individual task files, teaching staff
+will be able to observe the progression of a student through the task. The
+ability to submit files during any stage would allow students to request
+help from the teaching staff for already submitted files so discussion can be
+more targeted to the submissions in question.
## Feature: The ability to submit individual task files
### Feature 2 Current Implementation
-Currently, the Ontrack platform requires students to submit all the required task files, and in a
-specific order, when completing the tasks.
+Currently, the OnTrack platform requires students to submit all the required
+task files, and in a specific order, when completing the tasks.
### Feature 2 Proposal
-Implemented alongside the ability to submit task files, regardless of the task state, the
-enhancement will be present as a new upload dialog in which files can be upload in any order and
-won’t require all files to be uploaded at once. Future iterations of this feature could include
-individual task states (‘Working On It’, ‘Need Help’, and ‘Completed’) for each uploaded task.
+Implemented alongside the ability to submit task files regardless of the task
+state, the enhancement will be presented as a new upload dialog in which files
+can be uploaded in any order, without requiring all files to be uploaded at
+once. Future iterations of this feature could include individual task states
+(‘Working On It’, ‘Need Help’, and ‘Completed’) for each uploaded task.
### Feature 2 Value
-The ability to submit individual files will benefit students by allowing them to submit portions of
-their task. For tasks that require multiple files to be submitted for completion, this means that
-students can submit their files a number of times as a form of version control, minimising the
-potential for file loss if technical difficulties occur.
+The ability to submit individual files will benefit students by allowing them to
+submit portions of their task. For tasks that require multiple files to be
+submitted for completion, this means that students can submit their files a
+number of times as a form of version control, minimising the potential for file
+loss if technical difficulties occur.
## Feature: The ability to submit optional additional files outside of the task requirements
### Feature 3 Current Implementation
-Currently, the Ontrack platform does not allow for the upload of additional files outside of the
-comment section for attachments. Students are only able to submit the required task files.
+Currently, the OnTrack platform does not allow for the upload of additional
+files outside of the comment section for attachments. Students are only able to
+submit the required task files.
### Feature 3 Proposal
-Implemented within the ability to submit individual files and in conjunction with the added freedom
-to upload tasks in any order, the feature will present as a new submission item alongside the
-required files, allowing students to submit files that are outside of the task’s requirements.
+Implemented within the ability to submit individual files and in conjunction
+with the added freedom to upload task files in any order, the feature will
+present as a new submission item alongside the required files, allowing
+students to submit files that are outside of the task’s requirements.
### Feature 3 Value
-The feature will add value to both students and teaching staff as the students will be able to
-submit files that they believe to be complementary to the task (e.g. Learning summaries, output
-files, etc). Teaching staff will also be able to request additional files (expanded explanations,
-output files, additional tasks) from the students during feedback, without having the need to use
-the comment section’s attachments.
+The feature will add value to both students and teaching staff, as students
+will be able to submit files that they believe to be complementary to the task
+(e.g. learning summaries, output files, etc.). Teaching staff will also be able
+to request additional files (expanded explanations, output files, additional
+tasks) from students during feedback, without needing to use the comment
+section’s attachments.
## Feature: The ability to observe task file upload differences
@@ -87,18 +96,19 @@ Currently, the Ontrack platform does not support upload file diff-viewing.
### Feature 4 Proposal
-The feature will be presented as a new tab or button or button to open a new display where task
-submission files can be viewed as a side-by-side view with differences highlighted, akin to GitHub
-pull-requests.
+The feature will be presented as a new tab or button that opens a new display
+where task submission files can be viewed side by side with differences
+highlighted, akin to GitHub pull requests.
### Feature 4 Value
-Primarily of value to the teaching staff, tasks that may have been marked as ‘Fix/Resubmit’ will be
-able to be compared to their resubmitted task file. This will allow for easier identification of the
-changes made, ensuring that appropriate fixes have been made by the student without the need to
-review the entire upload.
+Primarily of value to the teaching staff, tasks that have been marked as
+‘Fix/Resubmit’ can be compared with their resubmitted task files. This will
+allow for easier identification of the changes made, ensuring that the student
+has made appropriate fixes without the need to review the entire upload.
## Additional Notes
-No design choices have been finalised and should be iterated upon with input from the client and the
-UI Enhancement team.
+No design choices have been finalised and should be iterated upon with input
+from the client and the UI Enhancement team.
diff --git a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/user-stories-and-features.md b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/user-stories-and-features.md
index fa65adf50..26a429972 100644
--- a/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/user-stories-and-features.md
+++ b/src/content/docs/Products/OnTrack/Projects/Task Submission and Redesign/user-stories-and-features.md
@@ -4,8 +4,8 @@ title: User Stories and Features
[Back to index](/products/ontrack/projects/task-submission-and-redesign)
-The personas, user stories, and features (as derived from the user stories) for the Task View and
-Submission Redesign project.
+The personas, user stories, and features (as derived from the user stories) for
+the Task View and Submission Redesign project.
## Identified Personas
@@ -20,42 +20,45 @@ In the form, "As a \[persona\], I \[want to\], \[so that\]."
### Students
-1. As a \[student\], I \[want to be able to traverse OnTrack in a sensible way\], so that I \[can
- submit my work with ease\].
-1. As a student, I \[want to be able to re-submit some of many files\], so that I \[do not have to
- re-upload all files related to a task\].
+1. As a \[student\], I \[want to be able to traverse OnTrack in a sensible
+ way\], so that I \[can submit my work with ease\].
+1. As a student, I \[want to be able to re-submit some of many files\], so that
+ I \[do not have to re-upload all files related to a task\].
1. As a student, I \[want to be able to see a history of events\], so that I
-   [can see the last time a file was uploaded or a message was sent by the tutor\].
-1. As a student, I \[want to be able to include comments with my uploads\], so that I \[may discuss
-   the task with my tutor\].
-1. As a student, I \[want to be able to view my submissions in my browser\], so that I \[don't have
-   to keep downloading copies of my submissions\].
+   \[can see the last time a file was uploaded or a message was sent by the
+   tutor\].
+1. As a student, I \[want to be able to include comments with my uploads\], so
+ that I \[may discuss the task with my tutor\].
+1. As a student, I \[want to be able to view my submissions in my browser\], so
+ that I \[don't have to keep downloading copies of my submissions\].
### Tutors
-1. As a tutor, I \[want to make sure that my students understand a concept\], so that \[they can
- succeed at their studies\].
+1. As a tutor, I \[want to make sure that my students understand a concept\], so
+ that \[they can succeed at their studies\].
1. As a tutor, I \[want to see a clear log of my interactions\], so that I
   \[can orient myself more quickly\].
-1. As a tutor, I \[want to be able to compare student code files they have submitted\].
-1. As a masker, I \[want to be able to highlight and leave notes on files\], so that I \[can provide
- feedback to my students\].
+1. As a tutor, I \[want to be able to compare the code files students have
+   submitted\].
+1. As a marker, I \[want to be able to highlight and leave notes on files\], so
+ that I \[can provide feedback to my students\].
### Convenors
-1. As a convenor, I \[want tutors to interact with their students before marking\], so that \[they
- interact with their students\].
+1. As a convenor, I \[want tutors to interact with their students before
+ marking\], so that \[they interact with their students\].
### Developers
-1. As a developer, I \[want a high-fidelity wire-frame\], so that I \[can create a design that
- further incorporates OnTrack/Doubtfire's visual style and nuance\].
-1. As a developer, I \[want a UI\UX prototype of the product\], so that I \[can create documentation
- on the design\].
-1. As a developer, I \[want a UI/UX prototype of the product\], so that I \[can create a prototype
- of the design\].
-1. As a developer, I \[want more interaction facilitated by a chat-bot\], so that \[interactions are
- streamlined and feel modern\].
+1. As a developer, I \[want a high-fidelity wire-frame\], so that I \[can create
+ a design that further incorporates OnTrack/Doubtfire's visual style and
+ nuance\].
+1. As a developer, I \[want a UI/UX prototype of the product\], so that I \[can
+ create documentation on the design\].
+1. As a developer, I \[want a UI/UX prototype of the product\], so that I \[can
+ create a prototype of the design\].
+1. As a developer, I \[want more interaction facilitated by a chat-bot\], so
+ that \[interactions are streamlined and feel modern\].
## Features
diff --git a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-back-end.md b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-back-end.md
index c33ef58ee..f958e1776 100644
--- a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-back-end.md
+++ b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-back-end.md
@@ -6,14 +6,15 @@ title: Backend Design Document for "Tutor Times" Feature in OnTrack
### 1.1 Purpose
-This document outlines the design of the backend for the "Tutor Times" feature in OnTrack (formerly
-known as Doubtfire). The purpose is to establish the architectural and functional aspects of the
-backend necessary to support efficient time tracking and management for tutors.
+This document outlines the design of the backend for the "Tutor Times" feature
+in OnTrack (formerly known as Doubtfire). The purpose is to establish the
+architectural and functional aspects of the backend necessary to support
+efficient time tracking and management for tutors.
### 1.2 Scope
-The scope of this design document covers the following aspects of the backend development for the
-"Tutor Times" feature:
+The scope of this design document covers the following aspects of the backend
+development for the "Tutor Times" feature:
- Data Models and Schema
- API Endpoints
@@ -26,31 +27,32 @@ The scope of this design document covers the following aspects of the backend de
### 1.3 Intended Audience
-This document is intended for backend developers, database administrators, and stakeholders involved
-in the implementation of the "Tutor Times" feature.
+This document is intended for backend developers, database administrators, and
+stakeholders involved in the implementation of the "Tutor Times" feature.
## 2. Architecture and Data Models
-- A link for UML diagrams will be provided here in future to illustrate the architecture and data
- models for the "Tutor Times" feature.
+- A link for UML diagrams will be provided here in future to illustrate the
+ architecture and data models for the "Tutor Times" feature.
### 2.1 Data Storage
-- Create a new database table named `tutor_times` or modify an existing one to store marking time
- data for tutors and students.
-- Define fields such as `tutor_id`, `student_id`, `task_id`, `start_time`, and `end_time` to record
- marking session details.
+- Create a new database table named `tutor_times` or modify an existing one to
+ store marking time data for tutors and students.
+- Define fields such as `tutor_id`, `student_id`, `task_id`, `start_time`, and
+ `end_time` to record marking session details.
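
The field list above can be sketched as a record type with a small helper for
one session's duration. This is purely illustrative: the field names follow the
document, but the types, the `TutorTimeRecord` name, and the `sessionMinutes`
helper are assumptions, not part of any existing OnTrack schema.

```typescript
// Hypothetical shape of one row in the proposed `tutor_times` table.
// Field names mirror the document; types are illustrative assumptions.
interface TutorTimeRecord {
  id: number;
  tutor_id: number;
  student_id: number;
  task_id: number;
  start_time: string; // ISO 8601 timestamp
  end_time: string;   // ISO 8601 timestamp
}

// Duration of a single marking session, in whole minutes.
function sessionMinutes(rec: TutorTimeRecord): number {
  const ms = Date.parse(rec.end_time) - Date.parse(rec.start_time);
  return Math.round(ms / 60000);
}
```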
### 2.2 Data Schema
-- Define a comprehensive data schema that includes relationships between tables to support the
- required functionality.
-- Ensure that the schema accommodates storing marking time data at both the student and task levels.
+- Define a comprehensive data schema that includes relationships between tables
+ to support the required functionality.
+- Ensure that the schema accommodates storing marking time data at both the
+ student and task levels.
### 2.3 Database Relationships
-- Establish relationships between tables to associate marking time data with tutors, students, and
- tasks.
+- Establish relationships between tables to associate marking time data with
+ tutors, students, and tasks.
- Define foreign keys and indices to optimize query performance.
## 3. API Design
@@ -61,85 +63,93 @@ in the implementation of the "Tutor Times" feature.
- Implement the following endpoints:
- `POST /api/tutor-times`: Create a new marking session record.
- `GET /api/tutor-times/:id`: Retrieve a specific marking session record.
- - `GET /api/tutor-times/tutor/:tutor_id`: Retrieve all marking session records for a specific
- tutor.
- - `GET /api/tutor-times/student/:student_id`: Retrieve all marking session records for a specific
- student.
+ - `GET /api/tutor-times/tutor/:tutor_id`: Retrieve all marking session records
+ for a specific tutor.
+ - `GET /api/tutor-times/student/:student_id`: Retrieve all marking session
+ records for a specific student.
- `PUT /api/tutor-times/:id`: Update an existing marking session record.
- `DELETE /api/tutor-times/:id`: Delete a marking session record.
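
A minimal sketch of the routes above as URL-building helpers, useful for keeping
a future client and the documentation in step. The `routes` object and its
method names are hypothetical; only the path strings come from the endpoint list
in this section.

```typescript
// Illustrative helpers that build the documented REST routes.
// Nothing here is a real OnTrack client; the paths mirror the list above.
const base = "/api/tutor-times";

const routes = {
  create: () => base,                                               // POST
  byId: (id: number) => `${base}/${id}`,                            // GET/PUT/DELETE
  byTutor: (tutorId: number) => `${base}/tutor/${tutorId}`,         // GET
  byStudent: (studentId: number) => `${base}/student/${studentId}`, // GET
};
```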
### 3.2 Authentication and Authorisation
-- Implement user authentication and authorisation to secure access to marking time data.
-- Ensure that only authorised users (tutors and unit chairs) can perform CRUD operations on marking
- session records.
+- Implement user authentication and authorisation to secure access to marking
+ time data.
+- Ensure that only authorised users (tutors and unit chairs) can perform CRUD
+ operations on marking session records.
## 4. Background Jobs/Triggers
### 4.1 Calculation of Marking Time Totals
-- Develop background jobs or database triggers to calculate and update total marking time for each
- tutor and student.
-- The system should automatically update marking time totals when new marking session records are
- added or modified.
+- Develop background jobs or database triggers to calculate and update total
+ marking time for each tutor and student.
+- The system should automatically update marking time totals when new marking
+ session records are added or modified.
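
The recalculation step such a job or trigger would perform can be sketched as a
pure function over all stored sessions. The `Session` shape and the
`totalsByTutor` name are assumptions for illustration; in practice the totals
would be recomputed (or incrementally updated) whenever records change.

```typescript
// Minimal sketch of the total-recalculation step of the background job.
// Session shape and storage are illustrative assumptions.
interface Session {
  tutor_id: number;
  start_time: string; // ISO 8601
  end_time: string;   // ISO 8601
}

// Returns total marking minutes per tutor, recomputed from all sessions.
function totalsByTutor(sessions: Session[]): Map<number, number> {
  const totals = new Map<number, number>();
  for (const s of sessions) {
    const mins = (Date.parse(s.end_time) - Date.parse(s.start_time)) / 60000;
    totals.set(s.tutor_id, (totals.get(s.tutor_id) ?? 0) + mins);
  }
  return totals;
}
```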
## 5. Data Integrity and Validation
### 5.1 Data Integrity Constraints
-- Implement data integrity constraints to ensure the accuracy and consistency of data.
-- Enforce rules such as referential integrity and data type validation to maintain data quality.
+- Implement data integrity constraints to ensure the accuracy and consistency of
+ data.
+- Enforce rules such as referential integrity and data type validation to
+ maintain data quality.
## 6. Non-Functional Requirements
### 6.1 Performance Optimization
-- Optimize database queries and operations to ensure fast data retrieval, even as the volume of
- marking time records grows.
-- Implement caching mechanisms to reduce query load and enhance system performance.
+- Optimize database queries and operations to ensure fast data retrieval, even
+ as the volume of marking time records grows.
+- Implement caching mechanisms to reduce query load and enhance system
+ performance.
### 6.2 Security Measures
-- Implement necessary security measures to protect marking time data and prevent unauthorized
- access.
+- Implement necessary security measures to protect marking time data and prevent
+ unauthorized access.
- Use encryption to secure sensitive data, such as user credentials.
### 6.3 Compatibility
- Ensure compatibility with the frontend and other system components.
-- Verify that the API endpoints work seamlessly with modern web browsers and other clients.
+- Verify that the API endpoints work seamlessly with modern web browsers and
+ other clients.
## 7. Testing Strategy
### 7.1 Unit Testing
-- Develop comprehensive unit tests for API endpoints, database interactions, and background jobs to
- ensure the correctness and reliability of backend components.
+- Develop comprehensive unit tests for API endpoints, database interactions, and
+ background jobs to ensure the correctness and reliability of backend
+ components.
### 7.2 Integration Testing
-- Perform integration testing to verify the seamless integration of backend components with the
- frontend and other system modules.
+- Perform integration testing to verify the seamless integration of backend
+ components with the frontend and other system modules.
## 8. Deployment Plan
### 8.1 Deployment Environment
-- Deploy the backend of the "Tutor Times" feature to the production environment of OnTrack.
+- Deploy the backend of the "Tutor Times" feature to the production environment
+ of OnTrack.
### 8.2 Deployment Process
-- Follow a systematic deployment process to release backend updates, including version control and
- continuous integration practices.
+- Follow a systematic deployment process to release backend updates, including
+ version control and continuous integration practices.
## 9. Conclusion
-This design document provides a detailed plan for the backend implementation of the "Tutor Times"
-feature in OnTrack. It covers the architectural aspects, data models, API design, security measures,
-testing strategies, and deployment plans. By following this design, we ensure the reliable and
-efficient operation of the "Tutor Times" feature, enhancing the user experience for tutors and
-students.
+This design document provides a detailed plan for the backend implementation of
+the "Tutor Times" feature in OnTrack. It covers the architectural aspects, data
+models, API design, security measures, testing strategies, and deployment plans.
+By following this design, we ensure the reliable and efficient operation of the
+"Tutor Times" feature, enhancing the user experience for tutors and students.
## 10. Appendices
-- Include any additional information, diagrams, or references that support the design document.
+- Include any additional information, diagrams, or references that support the
+ design document.
diff --git a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-front-end.md b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-front-end.md
index 7135b73a4..299c63779 100644
--- a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-front-end.md
+++ b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/design-front-end.md
@@ -6,28 +6,29 @@ title: Frontend Design Document for "Tutor Times" Feature in OnTrack
### 1.1 Purpose
-This document outlines the design of the frontend for the "Tutor Times" feature in OnTrack (formerly
-known as Doubtfire). The purpose is to provide an intuitive and user-friendly interface for tutors
-to track and manage the time spent on providing feedback to students.
+This document outlines the design of the frontend for the "Tutor Times" feature
+in OnTrack (formerly known as Doubtfire). The purpose is to provide an intuitive
+and user-friendly interface for tutors to track and manage the time spent on
+providing feedback to students.
### 1.2 Scope
-The scope of this design document covers the user interface (UI) and user experience (UX) aspects of
-the "Tutor Times" feature within the OnTrack Learning Management System. This feature will enhance
-the skill-based course delivery model by enabling tutors to monitor their time management
-efficiently.
+The scope of this design document covers the user interface (UI) and user
+experience (UX) aspects of the "Tutor Times" feature within the OnTrack Learning
+Management System. This feature will enhance the skill-based course delivery
+model by enabling tutors to monitor their time management efficiently.
### 1.3 Intended Audience
-This document is intended for frontend developers, designers, and stakeholders involved in the
-implementation of the "Tutor Times" feature.
+This document is intended for frontend developers, designers, and stakeholders
+involved in the implementation of the "Tutor Times" feature.
## 2. User Interface (UI) Design
### 2.1 Overview
-The "Tutor Times" feature will seamlessly integrate into the existing OnTrack UI, maintaining a
-cohesive visual identity and navigation structure.
+The "Tutor Times" feature will seamlessly integrate into the existing OnTrack
+UI, maintaining a cohesive visual identity and navigation structure.
### 2.2 Wireframes and Mockups
@@ -35,20 +36,22 @@ cohesive visual identity and navigation structure.
- A link will be provided here in future to the mockup for the Dashboard.
-- The dashboard provides an overview of marking time statistics, including total time spent, average
- time per student, and notifications.
+- The dashboard provides an overview of marking time statistics, including total
+ time spent, average time per student, and notifications.
#### 2.2.2 Student Feedback Page
-- A link will be provided here in future to the mockup for the Student Feedback Page.
+- A link will be provided here in future to the mockup for the Student Feedback
+ Page.
-- The Student Feedback Page displays a list of students and their respective marking times. Tutors
- can start, stop, or manually input time for each student.
+- The Student Feedback Page displays a list of students and their respective
+ marking times. Tutors can start, stop, or manually input time for each
+ student.
### 2.3 Responsive Design
-The UI will be responsive to ensure a consistent user experience across various devices, including
-desktops, tablets, and mobile phones.
+The UI will be responsive to ensure a consistent user experience across various
+devices, including desktops, tablets, and mobile phones.
### 2.4 Colour Scheme
@@ -64,27 +67,29 @@ desktops, tablets, and mobile phones.
### 2.6 Icons
-Standard icons will be used for actions such as starting and stopping timers, along with custom
-icons for notifications.
+Standard icons will be used for actions such as starting and stopping timers,
+along with custom icons for notifications.
### 2.7 Navigation
-The "Tutor Times" feature will be accessible through the main navigation menu within OnTrack. Clear
-breadcrumbs will guide users through the application.
+The "Tutor Times" feature will be accessible through the main navigation menu
+within OnTrack. Clear breadcrumbs will guide users through the application.
### 2.8 Forms and Inputs
-Input forms will include text fields for manual time input, along with start and stop buttons for
-timers. Error handling will include validation and user-friendly error messages.
+Input forms will include text fields for manual time input, along with start and
+stop buttons for timers. Error handling will include validation and
+user-friendly error messages.
### 2.9 Notifications
-Notifications will be displayed at the top of the dashboard, providing real-time feedback on marking
-progress and milestones.
+Notifications will be displayed at the top of the dashboard, providing real-time
+feedback on marking progress and milestones.
### 2.10 User Profiles
-Tutors will have access to their profiles to view personal information and settings.
+Tutors will have access to their profiles to view personal information and
+settings.
## 3. User Experience (UX) Design
@@ -101,43 +106,47 @@ Tutors will have access to their profiles to view personal information and setti
### 3.2 Accessibility
-Accessibility features will be implemented, including alt text for images, keyboard navigation, and
-screen reader compatibility.
+Accessibility features will be implemented, including alt text for images,
+keyboard navigation, and screen reader compatibility.
### 3.3 Usability
-The UI will prioritize usability, with clear and intuitive interactions, ensuring tutors can
-efficiently manage marking times.
+The UI will prioritize usability, with clear and intuitive interactions,
+ensuring tutors can efficiently manage marking times.
### 3.4 User Feedback
-A feedback mechanism will be incorporated for users to report issues or suggest improvements,
-enhancing the feature over time.
+A feedback mechanism will be incorporated for users to report issues or suggest
+improvements, enhancing the feature over time.
## 4. Interactive Features
### 4.1 Timer/Stopwatch Feature
-- Tutors can start, stop, and reset timers to track marking time for each student accurately.
+- Tutors can start, stop, and reset timers to track marking time for each
+ student accurately.
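
The start/stop/reset behaviour described above can be sketched as a small timer
class. The `MarkingTimer` name and injected clock are illustrative assumptions
(the clock is injected so the logic is testable; a real component would default
to `Date.now`), not an existing OnTrack component.

```typescript
// Minimal start/stop/reset stopwatch sketch for per-student marking time.
class MarkingTimer {
  private startedAt: number | null = null;
  private accumulatedMs = 0;

  // Inject the clock for deterministic tests; real UI uses Date.now.
  constructor(private now: () => number = Date.now) {}

  start(): void {
    if (this.startedAt === null) this.startedAt = this.now();
  }

  stop(): void {
    if (this.startedAt !== null) {
      this.accumulatedMs += this.now() - this.startedAt;
      this.startedAt = null;
    }
  }

  reset(): void {
    this.startedAt = null;
    this.accumulatedMs = 0;
  }

  elapsedMs(): number {
    const running = this.startedAt === null ? 0 : this.now() - this.startedAt;
    return this.accumulatedMs + running;
  }
}
```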
### 4.2 Manual Time Input
-- Tutors have the option to manually input marking time for students, providing flexibility in time
- tracking.
+- Tutors have the option to manually input marking time for students, providing
+ flexibility in time tracking.
### 4.3 Notification System
-- Real-time notifications will alert tutors of milestones and progress, enhancing user engagement.
+- Real-time notifications will alert tutors of milestones and progress,
+ enhancing user engagement.
## 5. Performance Considerations
### 5.1 Page Load Times
-Efforts will be made to optimize page load times to ensure a seamless user experience.
+Efforts will be made to optimize page load times to ensure a seamless user
+experience.
### 5.2 Caching
-Caching mechanisms will be implemented to reduce load times and improve overall performance.
+Caching mechanisms will be implemented to reduce load times and improve overall
+performance.
## 6. Compatibility
@@ -148,8 +157,8 @@ Caching mechanisms will be implemented to reduce load times and improve overall
### 6.2 Device Compatibility
-Responsive design will ensure compatibility with various devices, including desktops, tablets, and
-mobile phones.
+Responsive design will ensure compatibility with various devices, including
+desktops, tablets, and mobile phones.
## 7. Security
@@ -159,30 +168,32 @@ mobile phones.
### 7.2 HTTPS
-- HTTPS will be enforced to secure data transmission between the frontend and backend.
+- HTTPS will be enforced to secure data transmission between the frontend and
+ backend.
## 8. Version Control and Collaboration
### 8.1 Version Control
-- Git will be used for version control, following a branching strategy for collaborative
- development.
+- Git will be used for version control, following a branching strategy for
+ collaborative development.
### 8.2 Collaboration Tools
-- Tools like Slack and project management software will facilitate communication among team members.
+- Tools like Slack and project management software will facilitate communication
+ among team members.
## 9. Testing Plan
### 9.1 Unit Testing
-- Unit tests will be developed for frontend components, including timers, input forms, and
- notifications.
+- Unit tests will be developed for frontend components, including timers, input
+ forms, and notifications.
### 9.2 User Acceptance Testing
-- User acceptance testing (UAT) will ensure that the "Tutor Times" feature meets user requirements
- and expectations.
+- User acceptance testing (UAT) will ensure that the "Tutor Times" feature meets
+ user requirements and expectations.
## 10. Deployment Plan
@@ -192,19 +203,22 @@ mobile phones.
### 10.2 Deployment Process
-- A systematic deployment process will be followed to release frontend updates to the live
- environment.
+- A systematic deployment process will be followed to release frontend updates
+ to the live environment.
## 11. Conclusion
-This design document provides a comprehensive plan for the frontend implementation of the "Tutor
-Times" feature in OnTrack. It outlines the UI/UX design, interactive features, performance
-considerations, compatibility, security measures, and testing strategies. This design will enhance
-the learning experience for tutors and students, promoting efficient time management and feedback
-delivery.
+This design document provides a comprehensive plan for the frontend
+implementation of the "Tutor Times" feature in OnTrack. It outlines the UI/UX
+design, interactive features, performance considerations, compatibility,
+security measures, and testing strategies. This design will enhance the learning
+experience for tutors and students, promoting efficient time management and
+feedback delivery.
## 12. Appendices
-- Once the UI and UX designs are finalized, links will be provided to the mockups.
+- Once the UI and UX designs are finalized, links will be provided to the
+ mockups.
- Once the UML diagrams are finalized, links will be provided to the diagrams.
-- Once the feature is implemented, a link will be provided to the frontend repository.
+- Once the feature is implemented, a link will be provided to the frontend
+ repository.
diff --git a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-back-end.md b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-back-end.md
index b029b0fe2..49219d7dd 100644
--- a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-back-end.md
+++ b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-back-end.md
@@ -6,53 +6,56 @@ title: Backend Requirements Document
## Table of Contents
-1. [Introduction](#1-introduction) 1.1 [Purpose](#11-purpose) 1.2 [Scope] (#12-scope) 1.3
- [Intended Audience](#13-intended-audience)
+1. [Introduction](#1-introduction) 1.1 [Purpose](#11-purpose)
+   1.2 [Scope](#12-scope) 1.3 [Intended Audience](#13-intended-audience)
-2. [Functional Requirements](#2-functional-requirements) 2.1 [Data Storage] (#21-data-storage) 2.2
- [API Endpoints](#22-api-endpoints) 2.3
+2. [Functional Requirements](#2-functional-requirements)
+   2.1 [Data Storage](#21-data-storage) 2.2 [API Endpoints](#22-api-endpoints) 2.3
[Authentication and Authorisation](#23-authentication-and-authorisation) 2.4
- [Background Jobs/Triggers](#24-background-jobstriggers) 2.5 [Data Schema](#25-data-schema)
+ [Background Jobs/Triggers](#24-background-jobstriggers) 2.5
+ [Data Schema](#25-data-schema)
-3. [Non-Functional Requirements](#3-non-functional-requirements) 3.1 [Performance](#31-performance)
- 3.2 [Security](#32-security) 3.3 [Compatibility](#33-compatibility)
+3. [Non-Functional Requirements](#3-non-functional-requirements) 3.1
+ [Performance](#31-performance) 3.2 [Security](#32-security) 3.3
+ [Compatibility](#33-compatibility)
4. [User Stories](#4-user-stories) 4.1 [As a tutor...](#41-as-a-tutor) 4.2
- [As a unit chair...](#42-as-a-unit-chair) 4.3 [As a unit chair...](#43-as-a-unit-chair)
+ [As a unit chair...](#42-as-a-unit-chair) 4.3
+ [As a unit chair...](#43-as-a-unit-chair)
-5. [Database Schema](#5-database-schema) 5.1 [Tables and Fields] (#51-tables-and-fields) 5.2
- [Relationships](#52-relationships) 5.3
+5. [Database Schema](#5-database-schema) 5.1
+   [Tables and Fields](#51-tables-and-fields) 5.2
+   [Relationships](#52-relationships) 5.3
[Data Integrity Constraints](#53-data-integrity-constraints)
-6. [Testing Requirements](#6-testing-requirements) 6.1 [Unit Testing] (#61-unit-testing) 6.2
- [Integration Testing](#62-integration-testing)
+6. [Testing Requirements](#6-testing-requirements) 6.1
+   [Unit Testing](#61-unit-testing) 6.2
+   [Integration Testing](#62-integration-testing)
## 1. Introduction
### 1.1 Purpose
-The purpose of this document is to outline the requirements for the backend development of the
-"Tutor Times" feature. This feature will enable the storage and retrieval of marking time data for
-tutors and students.
+The purpose of this document is to outline the requirements for the backend
+development of the "Tutor Times" feature. This feature will enable the storage
+and retrieval of marking time data for tutors and students.
### 1.2 Scope
-The scope of this document covers the functional and non-functional requirements for the backend
-implementation of the "Tutor Times" feature.
+The scope of this document covers the functional and non-functional requirements
+for the backend implementation of the "Tutor Times" feature.
### 1.3 Intended Audience
-This document is intended for backend developers and the development team responsible for
-implementing the "Tutor Times" feature.
+This document is intended for backend developers and the development team
+responsible for implementing the "Tutor Times" feature.
## 2. Functional Requirements
### 2.1 Data Storage
-- Create a new database table named `tutor_times` or modify an existing one to store marking time
- data for tutors and students.
-- Define fields such as `tutor_id`, `student_id`, `task_id`, `start_time`, and `end_time` to record
- marking session details.
+- Create a new database table named `tutor_times` or modify an existing one to
+ store marking time data for tutors and students.
+- Define fields such as `tutor_id`, `student_id`, `task_id`, `start_time`, and
+ `end_time` to record marking session details.
### 2.2 API Endpoints
@@ -60,67 +63,71 @@ implementing the "Tutor Times" feature.
- Implement the following endpoints:
- `POST /api/tutor-times`: Create a new marking session record.
- `GET /api/tutor-times/:id`: Retrieve a specific marking session record.
- - `GET /api/tutor-times/tutor/:tutor_id`: Retrieve all marking session records for a specific
- tutor.
- - `GET /api/tutor-times/student/:student_id`: Retrieve all marking session records for a specific
- student.
+ - `GET /api/tutor-times/tutor/:tutor_id`: Retrieve all marking session records
+ for a specific tutor.
+ - `GET /api/tutor-times/student/:student_id`: Retrieve all marking session
+ records for a specific student.
- `PUT /api/tutor-times/:id`: Update an existing marking session record.
- `DELETE /api/tutor-times/:id`: Delete a marking session record.
### 2.3 Authentication and Authorisation
-- Implement user authentication and authorisation to secure access to marking time data.
-- Ensure that only authorised users (tutors and unit chairs) can perform CRUD operations on marking
- session records.
+- Implement user authentication and authorisation to secure access to marking
+ time data.
+- Ensure that only authorised users (tutors and unit chairs) can perform CRUD
+ operations on marking session records.
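The authorisation rule above — only tutors and unit chairs may perform CRUD operations — can be sketched as a role check around a protected operation. The decorator, the role strings, and the user dictionary shape are all hypothetical, chosen only to illustrate the requirement:

```python
from functools import wraps

# Roles named in this document; the string values are assumptions.
ALLOWED_ROLES = ("tutor", "unit_chair")

def require_role(*roles):
    """Reject callers whose role is not in the allowed set (sketch only)."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if user.get("role") not in roles:
                raise PermissionError(
                    f"role {user.get('role')!r} may not call {func.__name__}"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role(*ALLOWED_ROLES)
def delete_marking_session(user, session_id):
    # A real implementation would delete the database row here.
    return f"session {session_id} deleted by {user['name']}"
```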
### 2.4 Background Jobs/Triggers
-- Develop background jobs or database triggers to calculate and update total marking time for each
- tutor and student.
-- The system should automatically update marking time totals when new marking session records are
- added or modified.
+- Develop background jobs or database triggers to calculate and update total
+ marking time for each tutor and student.
+- The system should automatically update marking time totals when new marking
+ session records are added or modified.
### 2.5 Data Schema
-- Define a comprehensive data schema that includes relationships between tables to support the
- required functionality.
-- Ensure that the schema accommodates storing marking time data at both the student and task levels.
+- Define a comprehensive data schema that includes relationships between tables
+ to support the required functionality.
+- Ensure that the schema accommodates storing marking time data at both the
+ student and task levels.
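One possible shape for the `tutor_times` table described in sections 2.1 and 2.5, sketched with SQLite purely for illustration — the column types, and the per-tutor total query that a background job or trigger would maintain, are assumptions rather than the final schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tutor_times (
    id         INTEGER PRIMARY KEY,
    tutor_id   INTEGER NOT NULL,
    student_id INTEGER NOT NULL,
    task_id    INTEGER NOT NULL,
    start_time TEXT    NOT NULL,  -- ISO-8601 timestamp
    end_time   TEXT    NOT NULL
);
""")

# Record a 25-minute marking session.
conn.execute(
    "INSERT INTO tutor_times (tutor_id, student_id, task_id, start_time, end_time) "
    "VALUES (?, ?, ?, ?, ?)",
    (1, 42, 7, "2024-01-10T09:00:00", "2024-01-10T09:25:00"),
)

# Total marking time per tutor, in minutes — the aggregate a background
# job or trigger (section 2.4) would keep up to date.
total_minutes = conn.execute(
    "SELECT SUM((julianday(end_time) - julianday(start_time)) * 24 * 60) "
    "FROM tutor_times WHERE tutor_id = ?",
    (1,),
).fetchone()[0]
```

In the real schema, `tutor_id`, `student_id`, and `task_id` would carry foreign-key constraints to the corresponding tables, per the relationships in section 5.2.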
## 3. Non-Functional Requirements
### 3.1 Performance
-- Optimize database queries and operations to ensure fast data retrieval, even as the volume of
- marking time records grows.
-- Implement caching mechanisms to reduce query load and enhance system performance.
+- Optimize database queries and operations to ensure fast data retrieval, even
+ as the volume of marking time records grows.
+- Implement caching mechanisms to reduce query load and enhance system
+ performance.
### 3.2 Security
-- Implement necessary security measures to protect marking time data and prevent unauthorised
- access.
+- Implement necessary security measures to protect marking time data and prevent
+ unauthorised access.
- Use encryption to secure sensitive data, such as user credentials.
### 3.3 Compatibility
- Ensure compatibility with the frontend and other system components.
-- Verify that the API endpoints work seamlessly with modern web browsers and other clients.
+- Verify that the API endpoints work seamlessly with modern web browsers and
+ other clients.
## 4. User Stories
### 4.1 As a tutor
-- Tutors should be able to view their marking time data on the frontend interface, which is
- retrieved from the backend via API calls.
+- Tutors should be able to view their marking time data on the frontend
+ interface, which is retrieved from the backend via API calls.
### 4.2 As a unit chair
-- Unit chairs should have access to total marking time data for each tutor through the frontend
- interface.
+- Unit chairs should have access to total marking time data for each tutor
+ through the frontend interface.
### 4.3 As a unit chair
-- Unit chairs should be able to see marking time data at the task level through the frontend
- interface.
+- Unit chairs should be able to see marking time data at the task level through
+ the frontend interface.
## 5. Database Schema
@@ -133,21 +140,23 @@ implementing the "Tutor Times" feature.
### 5.2 Relationships
-- Establish relationships between tables to associate marking time data with tutors, students, and
- tasks.
+- Establish relationships between tables to associate marking time data with
+ tutors, students, and tasks.
### 5.3 Data Integrity Constraints
-- Implement data integrity constraints to ensure the accuracy and consistency of data.
+- Implement data integrity constraints to ensure the accuracy and consistency of
+ data.
## 6. Testing Requirements
### 6.1 Unit Testing
-- Develop comprehensive unit tests for API endpoints, database interactions, and background jobs to
- ensure the correctness and reliability of backend components.
+- Develop comprehensive unit tests for API endpoints, database interactions, and
+ background jobs to ensure the correctness and reliability of backend
+ components.
### 6.2 Integration Testing
-- Perform integration testing to verify the seamless integration of backend components with the
- frontend and other system modules.
+- Perform integration testing to verify the seamless integration of backend
+ components with the frontend and other system modules.
diff --git a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-front-end.md b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-front-end.md
index 9149d7cc7..c54522a96 100644
--- a/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-front-end.md
+++ b/src/content/docs/Products/OnTrack/Projects/Tutor Times/Documentation/requrirements-front-end.md
@@ -44,47 +44,50 @@ title: Frontend Requirements Document
### 1.1 Purpose
-The purpose of this document is to outline the requirements for the frontend development of the
-"Tutor Times" feature. This feature will enable tutors to track and manage the time spent on
-providing feedback to students.
+The purpose of this document is to outline the requirements for the frontend
+development of the "Tutor Times" feature. This feature will enable tutors to
+track and manage the time spent on providing feedback to students.
### 1.2 Scope
-The scope of this document covers the functional and non-functional requirements for the frontend
-implementation of the "Tutor Times" feature.
+The scope of this document covers the functional and non-functional requirements
+for the frontend implementation of the "Tutor Times" feature.
### 1.3 Intended Audience
-This document is intended for frontend developers and the development team responsible for
-implementing the "Tutor Times" feature.
+This document is intended for frontend developers and the development team
+responsible for implementing the "Tutor Times" feature.
## 2. Functional Requirements
### 2.1 Tutor's Marking Progress Page
-- Create a dedicated page/dashboard where tutors can view their marking progress.
+- Create a dedicated page/dashboard where tutors can view their marking
+ progress.
- Display the time spent providing feedback to each student.
### 2.2 User Interface
-- Design an intuitive and user-friendly interface for the Tutor's Marking Progress Page.
+- Design an intuitive and user-friendly interface for the Tutor's Marking
+ Progress Page.
- Ensure responsive design for various screen sizes and devices.
- Provide an option for tutors to manually input marking time.
### 2.3 Timer/Stopwatch Feature
-- Implement a timer or stopwatch feature that tutors can start and stop to track time spent on each
- student.
+- Implement a timer or stopwatch feature that tutors can start and stop to track
+ time spent on each student.
- Ensure accuracy in time tracking.
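The start/stop behaviour described above can be sketched as follows. Python is used purely for illustration — the frontend would implement this in its own framework — and the class name is hypothetical. A monotonic clock is used because it is unaffected by system clock changes, which supports the accuracy requirement:

```python
import time

class MarkingStopwatch:
    """Stopwatch that accumulates elapsed time across start/stop cycles."""

    def __init__(self):
        self._started_at = None
        self.elapsed = 0.0  # seconds accumulated so far

    def start(self):
        # Ignore repeated starts while already running.
        if self._started_at is None:
            self._started_at = time.monotonic()

    def stop(self):
        # Fold the current run into the running total, then reset.
        if self._started_at is not None:
            self.elapsed += time.monotonic() - self._started_at
            self._started_at = None
        return self.elapsed
```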
### 2.4 Manual Time Input
-- Allow tutors to manually input marking time for each student in case they forget to start or stop
- the timer.
+- Allow tutors to manually input marking time for each student in case they
+ forget to start or stop the timer.
### 2.5 Notification System
-- Implement a notification system to alert tutors when they reach specific time milestones.
+- Implement a notification system to alert tutors when they reach specific time
+ milestones.
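The milestone alert above amounts to detecting which thresholds were crossed between two time readings. A minimal sketch, where the default milestone values are assumptions rather than product requirements:

```python
def milestones_crossed(previous_minutes, current_minutes,
                       milestones=(30, 60, 120)):
    """Return the milestones passed between two readings (sketch only)."""
    return [m for m in milestones if previous_minutes < m <= current_minutes]
```

The notification system would call this on each timer update and raise an alert for every milestone returned.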
### 3. Non-Functional Requirements
@@ -105,17 +108,18 @@ implementing the "Tutor Times" feature.
## 3.4 Security
-- Implement necessary security measures to protect user data and prevent unauthorized access to
- arking time records.
+- Implement necessary security measures to protect user data and prevent
+  unauthorized access to marking time records.
## 4. User Stories
### 4.1 User Story 1
-**# As a tutor, I want to see how long I have spent providing feedback to each student.**
+**As a tutor, I want to see how long I have spent providing feedback to each
+student.**
-- Tutors should be able to view the time spent on each student's feedback on the Tutor's Marking
- Progress Page.
+- Tutors should be able to view the time spent on each student's feedback on the
+ Tutor's Marking Progress Page.
### 4.2 User Story 2
@@ -123,28 +127,30 @@ implementing the "Tutor Times" feature.
feedback to each student.\*\*
-- Unit chairs should have access to view the total marking time for each tutor on the Tutor's
- Marking Progress Page.
+- Unit chairs should have access to view the total marking time for each tutor
+ on the Tutor's Marking Progress Page.
## 4.3 User Story 3
-**As a unit chair, I want to see how long each tutor has spent providing feedback to each task.**
+**As a unit chair, I want to see how long each tutor has spent providing
+feedback to each task.**
-- Unit chairs should be able to see the time spent by each tutor on specific tasks on the Tutor's
- Marking Progress Page.
+- Unit chairs should be able to see the time spent by each tutor on specific
+ tasks on the Tutor's Marking Progress Page.
## 5. Design Mockups
-- Link will be provided to the design mockups for the Tutor's Marking Progress Page.
+- A link will be provided to the design mockups for the Tutor's Marking
+  Progress Page.
## 6. Testing Requirements
## 6.1 Unit Testing
-- Develop unit tests to ensure the correctness and reliability of frontend components, including
- timers, manual input, and notifications.
+- Develop unit tests to ensure the correctness and reliability of frontend
+ components, including timers, manual input, and notifications.
## 6.2 User Acceptance Testing
-- Conduct user acceptance testing to verify that the "Tutor Times" feature meets the requirement and
- user expectations.
+- Conduct user acceptance testing to verify that the "Tutor Times" feature meets
+  the requirements and user expectations.
diff --git a/src/content/docs/Products/OnTrack/index.mdx b/src/content/docs/Products/OnTrack/index.mdx
index 4a8e617b2..cb4c1cb6a 100644
--- a/src/content/docs/Products/OnTrack/index.mdx
+++ b/src/content/docs/Products/OnTrack/index.mdx
@@ -9,10 +9,11 @@ sidebar:
import { Card, LinkCard, CardGrid, Icon } from "@astrojs/starlight/components";
- OnTrack is an advanced platform designed to streamline assignment submissions and feedback
- management for students and educators. As a flagship product of Thoth Tech, OnTrack redefines how
- students engage with assignments and feedback, making the learning experience more efficient,
- transparent, and impactful.
+ OnTrack is an advanced platform designed to streamline assignment submissions
+ and feedback management for students and educators. As a flagship product of
+ Thoth Tech, OnTrack redefines how students engage with assignments and
+ feedback, making the learning experience more efficient, transparent, and
+ impactful.
@@ -26,29 +27,41 @@ import { Card, LinkCard, CardGrid, Icon } from "@astrojs/starlight/components";
## thoth-tech GitHub repos
-
-
-
+
+
+
## About Us
-This team is working on the production of an innovative Learning Management System that is designed
-for a skill-based course delivery model.
+This team is working on the production of an innovative Learning Management
+System that is designed for a skill-based course delivery model.
-Students will gain real experience thourgh regular practice receive rapid feedback on their work on
-a weekly basis. This platform is used to connect tutors and students at Deakin university as well as
-other universities around the world.
+Students will gain real experience through regular practice and receive rapid
+feedback on their work on a weekly basis. This platform is used to connect
+tutors and students at Deakin University as well as other universities around
+the world.
## What is OnTrack?
-OnTrack is a powerful learning tool built to simplify and enhance the academic process. It empowers
-students to submit assignments with ease, receive meaningful feedback, and monitor their progress
-throughout their academic journey. OnTrack bridges the gap between students and educators, fostering
+OnTrack is a powerful learning tool built to simplify and enhance the academic
+process. It empowers students to submit assignments with ease, receive
+meaningful feedback, and monitor their progress throughout their academic
+journey. OnTrack bridges the gap between students and educators, fostering
collaboration and continuous improvement.
-By addressing the inefficiencies in traditional assignment workflows, OnTrack ensures that students
-and educators can focus on what truly matters: learning and teaching.
+By addressing the inefficiencies in traditional assignment workflows, OnTrack
+ensures that students and educators can focus on what truly matters: learning
+and teaching.
## Key Features
@@ -80,8 +93,8 @@ and educators can focus on what truly matters: learning and teaching.
### For Students
-- Ensures a stress-free workflow by automating repetitive tasks like deadline reminders and
- submission confirmations.
+- Ensures a stress-free workflow by automating repetitive tasks like deadline
+ reminders and submission confirmations.
- Promotes accountability by maintaining a record of submissions and feedback.
- Helps students stay focused on learning with clear, actionable insights.
@@ -89,41 +102,45 @@ and educators can focus on what truly matters: learning and teaching.
- Simplifies the grading process, saving time and effort.
- Provides structured feedback tools for consistent evaluation.
-- Enhances communication with students, ensuring queries are resolved efficiently.
+- Enhances communication with students, ensuring queries are resolved
+ efficiently.
## Integration with Thoth Tech
-OnTrack is a cornerstone of Thoth Tech, a student-led initiative under the guidance of senior
-professors. As part of the ecosystem, it shares Thoth Tech's vision of creating innovative solutions
-that address real-world challenges in education. OnTrack complements other Thoth Tech products by
-fostering a feedback-driven culture in learning environments.
+OnTrack is a cornerstone of Thoth Tech, a student-led initiative under the
+guidance of senior professors. As part of the ecosystem, it shares Thoth Tech's
+vision of creating innovative solutions that address real-world challenges in
+education. OnTrack complements other Thoth Tech products by fostering a
+feedback-driven culture in learning environments.
## Why OnTrack for Team Project A?
-OnTrack is an ideal choice for students participating in Team Project A. It offers a unique
-opportunity to contribute to a project that directly impacts education while gaining valuable
-skills.
+OnTrack is an ideal choice for students participating in Team Project A. It
+offers a unique opportunity to contribute to a project that directly impacts
+education while gaining valuable skills.
### Key Benefits for Team Project A
-- **Real-World Relevance:** Work on an active product solving critical challenges in education.
-- **Skill Building:** Gain hands-on experience in software development, user experience design, and
- project management.
-- **Scalable Impact:** Contribute to a tool with the potential for adoption beyond your university,
- leaving a lasting legacy.
-- **Creative Freedom:** Innovate within a flexible framework, focusing on improving the student and
- educator experience.
+- **Real-World Relevance:** Work on an active product solving critical
+ challenges in education.
+- **Skill Building:** Gain hands-on experience in software development, user
+ experience design, and project management.
+- **Scalable Impact:** Contribute to a tool with the potential for adoption
+ beyond your university, leaving a lasting legacy.
+- **Creative Freedom:** Innovate within a flexible framework, focusing on
+ improving the student and educator experience.
## How OnTrack Stands Out
OnTrack distinguishes itself from other projects with its:
-- **Student-Centric Design:** Built for students, shaped by their input, and continuously improved
- based on their feedback.
-- **Holistic Learning Approach:** Encourages students to think beyond technical skills,
- incorporating design thinking and collaboration.
-- **Tangible Outcomes:** Delivers measurable benefits to users, ensuring every contribution has a
- lasting impact.
+- **Student-Centric Design:** Built for students, shaped by their input, and
+ continuously improved based on their feedback.
+- **Holistic Learning Approach:** Encourages students to think beyond technical
+ skills, incorporating design thinking and collaboration.
+- **Tangible Outcomes:** Delivers measurable benefits to users, ensuring every
+ contribution has a lasting impact.
-By joining OnTrack, students in Team Project A can make a meaningful contribution to education while
-building the skills they need for their future careers.
+By joining OnTrack, students in Team Project A can make a meaningful
+contribution to education while building the skills they need for their future
+careers.
diff --git a/src/content/docs/Products/SplashKit/01-overview.mdx b/src/content/docs/Products/SplashKit/01-overview.mdx
index 9b7ca0223..51608286e 100644
--- a/src/content/docs/Products/SplashKit/01-overview.mdx
+++ b/src/content/docs/Products/SplashKit/01-overview.mdx
@@ -8,9 +8,10 @@ sidebar:
import { LinkCard, Tabs, TabItem } from "@astrojs/starlight/components";
-Welcome to the SplashKit onboarding section! This set of guides will help you get started with
-using, contributing to, and maintaining all things SplashKit. The onboarding materials are designed
-to be useful both as a step-by-step walkthrough for newcomers and as a reference for ongoing tasks.
+Welcome to the SplashKit onboarding section! This set of guides will help you
+get started with using, contributing to, and maintaining all things SplashKit.
+The onboarding materials are designed to be useful both as a step-by-step
+walkthrough for newcomers and as a reference for ongoing tasks.
## Getting Started
@@ -34,7 +35,8 @@ If you're new to SplashKit, start with these foundational guides:
## Contributing to the SplashKit
-Ready to contribute? These guides cover how to make your own contributions to your SplashKit team
+Ready to contribute? These guides cover how to make your own contributions to
+your SplashKit team.
@@ -71,8 +73,8 @@ Ready to contribute? These guides cover how to make your own contributions to yo
---
-When you have made your first contribution and are ready for feedback follow these guides on how to
-make a pull request
+When you have made your first contribution and are ready for feedback, follow
+these guides on how to make a pull request.
@@ -124,27 +132,32 @@ This will install all the necessary packages listed in the `package.json` file.
### 8. Installing WSL
-WSL is a built-in Linux distribution virtual machine for Windows. splashkit-core will be installed
-to the Linux distribution. The official SplashKit installation instructions can be found here:
-[Windows (WSL) Installation Overview](https://splashkit.io/installation/windows-wsl).
+WSL (Windows Subsystem for Linux) runs Linux distributions in a lightweight
+virtual machine on Windows. splashkit-core will be installed to the Linux
+distribution. The official SplashKit installation instructions can be found
+here:
+[Windows (WSL) Installation Overview](https://splashkit.io/installation/windows-wsl).
### 9. Installing Windows Terminal (optional)
-Windows Terminal is an updated Command Prompt with many useful features. It is not mandatory to
-install, however it is recommended due to its ease of use. More information about Windows Terminal
-can be found here: [Windows Terminal](https://learn.microsoft.com/en-us/windows/terminal/).It can be
-installed through the Microsoft Store.
+Windows Terminal is an updated Command Prompt with many useful features. It is
+not mandatory to install; however, it is recommended due to its ease of use.
+More information about Windows Terminal can be found here:
+[Windows Terminal](https://learn.microsoft.com/en-us/windows/terminal/). It can
+be installed through the Microsoft Store.
-By default, new tabs will open as a Command Prompt, with WSL terminals being accessible with the
-down-arrow. Since WSL will be used so frequently, there is the option to change the default tab to
-WSL. Open the settings by clicking the down arrow and selecting ‘Settings’. In the ‘Startup’ tab,
-‘Default profile’ allows you to change the default tab type to WSL.
+By default, new tabs will open as a Command Prompt, with WSL terminals being
+accessible with the down-arrow. Since WSL will be used so frequently, there is
+the option to change the default tab to WSL. Open the settings by clicking the
+down arrow and selecting ‘Settings’. In the ‘Startup’ tab, ‘Default profile’
+allows you to change the default tab type to WSL.
## Contributing
-You are should now be ready to start contributing to your team project. Make sure you head over to
-the [Planner board etiquette](/products/splashkit/07-planner-board) page and familiarize yourself
-with how to correctly interact with the Microsoft Teams Planner board and then get started.
+You should now be ready to start contributing to your team project. Make sure
+you head over to the
+[Planner board etiquette](/products/splashkit/07-planner-board) page and
+familiarize yourself with how to correctly interact with the Microsoft Teams
+Planner board, and then get started.
-If you are unfamiliar with how to start contributing through GitHub, check out the
-[GitHub guide.](/products/splashkit/03-github-guide)
+If you are unfamiliar with how to start contributing through GitHub, check out
+the [GitHub guide.](/products/splashkit/03-github-guide)
diff --git a/src/content/docs/Products/SplashKit/03-github-guide.mdx b/src/content/docs/Products/SplashKit/03-github-guide.mdx
index 1b3b27d46..1a5e5a263 100644
--- a/src/content/docs/Products/SplashKit/03-github-guide.mdx
+++ b/src/content/docs/Products/SplashKit/03-github-guide.mdx
@@ -9,16 +9,20 @@ import { Steps, Aside, Tabs, TabItem } from "@astrojs/starlight/components";
## Set up a Working Environment for SplashKit
-Here's a step-by-step guide on how to set up a working environment for SplashKit with GitHub
+Here's a step-by-step guide on how to set up a working environment for SplashKit
+with GitHub.
### 1. Install Git
-First step is to ensure you have the necessary tools installed on your workspace. Follow the guide
-in the [setting up](/products/splashkit/02-setting-up) section and then proceed to step 2
+The first step is to ensure you have the necessary tools installed in your
+workspace. Follow the guide in the
+[setting up](/products/splashkit/02-setting-up) section and then proceed to
+step 2.
### 2. Fork a GitHub Repository
-- Log in to GitHub: Go to [GitHub](https://github.com/) and log in with your details.
+- Log in to GitHub: Go to [GitHub](https://github.com/) and log in with your
+ details.
- Find the Repository:
@@ -30,8 +34,9 @@ in the [setting up](/products/splashkit/02-setting-up) section and then proceed

-- Fork the Repo: Click the "Fork" button at the top right of the repository page and create a new
- fork of the repository. 
+- Fork the Repo: Click the "Fork" button at the top right of the repository page
+ and create a new fork of the repository.
+ 
### 3. Clone the Forked Repository
@@ -45,8 +50,8 @@ SplashKit Core cloning is slightly different! Make sure you follow the right met
-1. Open a terminal on your machine and run the following commands, making sure to replace `USERNAME`
- with your own GitHub username:
+1. Open a terminal on your machine and run the following commands, making sure
+ to replace `USERNAME` with your own GitHub username:
```shell
git clone https://github.com/USERNAME/splashkit.io-starlight.git
@@ -69,7 +74,8 @@ SplashKit Core cloning is slightly different! Make sure you follow the right met
1. Open a new VSCode window.
-2. Open the command palette by pressing `cmd + shift + p` (or `ctrl + shift + p` on Windows/Linux).
+2. Open the command palette by pressing `cmd + shift + p` (or `ctrl + shift + p`
+ on Windows/Linux).
3. Type `git clone` and paste the URL of your forked repo.

@@ -78,7 +84,8 @@ SplashKit Core cloning is slightly different! Make sure you follow the right met

-5. Once the repo is cloned, VSCode will prompt you to open the repo folder location.
+5. Once the repo is cloned, VSCode will prompt you to open the repo folder
+ location.

@@ -91,13 +98,13 @@ Now you're all set up to start working on the SplashKit.io repo in VSCode.
-1. Open GitHub Desktop and click on the `File` tab in the top-left corner, then select
- `Clone Repository`.
+1. Open GitHub Desktop and click on the `File` tab in the top-left corner, then
+ select `Clone Repository`.

-2. Here you can either filter via your existing repositories, find the forked repo, or paste the URL
- of the forked repo.
+2. Here you can either filter via your existing repositories, find the forked
+ repo, or paste the URL of the forked repo.

@@ -115,21 +122,22 @@ Open a WSL terminal and change directory to your home with:
cd splashkit-core
```
-Note that this guide clones the repository to the home directory, but feel free to move its
-location. Now initiate the clone process of your fork with:
+Note that this guide clones the repository to the home directory, but feel free
+to move its location. Now initiate the clone process of your fork with:
```shell
git clone --recursive -j2 https://github.com/{user name}/splashkit-core.git
```
-splashkit-core contains multiple submodules (separate repositories which splashkit-core depends
-upon). The `--recursive` argument ensures that the submodules are also downloaded when calling
-clone. Wait for the download to complete before continuing to the next step.
+splashkit-core contains multiple submodules (separate repositories which
+splashkit-core depends upon). The `--recursive` argument ensures that the
+submodules are also downloaded when calling clone. Wait for the download to
+complete before continuing to the next step.
@@ -387,21 +404,22 @@ troubleshooting steps you can take to resolve the most common issues.
1. #### Identify Conflicts
- During a rebase or merge, if Git detects conflicts that it cannot resolve automatically, it will
- pause the process and display a message indicating which files have conflicts. You can check the
- status to identify which files are in conflict:
+ During a rebase or merge, if Git detects conflicts that it cannot resolve
+ automatically, it will pause the process and display a message indicating
+ which files have conflicts. You can check the status to identify which files
+ are in conflict:
```shell
git status
```
- Git will show the files that are in conflict and need your attention. Conflicted files will
- appear under the `both modified` section.
+ Git will show the files that are in conflict and need your attention.
+ Conflicted files will appear under the `both modified` section.
2. #### Open the Conflicted Files
- Open the conflicted files in your preferred text editor (e.g., vscode). In the file, you will see
- conflict markers that look like this:
+  Open the conflicted files in your preferred text editor (e.g., VS Code). In
+  the file, you will see conflict markers that look like this:
```plaintext
<<<<<<< HEAD
@@ -413,26 +431,27 @@ troubleshooting steps you can take to resolve the most common issues.
- **`HEAD`** contains the changes from your current branch.
- The **`=======`** separates the two conflicting versions.
- - The text below the `=======` represents changes from the branch you are merging or rebasing
- onto.
+ - The text below the `=======` represents changes from the branch you are
+ merging or rebasing onto.
3. #### Manually Resolve the Conflicts
- To resolve the conflict, decide whether to keep your changes, the incoming changes, or a
- combination of both.
- - **Keep your changes**: Delete the lines between `=======` and `>>>>>>>` and remove the conflict
- markers.
- - **Keep the incoming changes**: Delete the lines between `<<<<<<< HEAD` and `=======` and remove
- the conflict markers.
- - **Combine both changes**: Modify the conflicting section to include both sets of changes, based
- on your needs, and then remove the conflict markers.
+ To resolve the conflict, decide whether to keep your changes, the incoming
+ changes, or a combination of both.
+ - **Keep your changes**: Delete the lines between `=======` and `>>>>>>>` and
+ remove the conflict markers.
+ - **Keep the incoming changes**: Delete the lines between `<<<<<<< HEAD` and
+ `=======` and remove the conflict markers.
+ - **Combine both changes**: Modify the conflicting section to include both
+ sets of changes, based on your needs, and then remove the conflict markers.
- After resolving the conflicts, the file should look clean and without any conflict markers.
+ After resolving the conflicts, the file should look clean and without any
+ conflict markers.
4. #### Mark the Conflicts as Resolved
- Once you have resolved the conflicts in a file, you need to add the file to the staging area to
- let Git know that the conflict has been resolved:
+ Once you have resolved the conflicts in a file, you need to add the file to
+ the staging area to let Git know that the conflict has been resolved:
```shell
git add <file-name>
@@ -455,8 +474,8 @@ troubleshooting steps you can take to resolve the most common issues.
git merge --continue
```
- If at any point you want to abort the rebase or merge due to complications, you can use the
- following command:
+ If at any point you want to abort the rebase or merge due to complications,
+ you can use the following command:
```shell
git rebase --abort
@@ -470,9 +489,10 @@ troubleshooting steps you can take to resolve the most common issues.
6. #### Push the Resolved Changes
- After resolving the conflicts and completing the rebase or merge, you will need to push the
- changes back to your remote repository. If you performed a rebase, you will need to force-push
- the branch since the commit history has been rewritten:
+ After resolving the conflicts and completing the rebase or merge, you will
+ need to push the changes back to your remote repository. If you performed a
+ rebase, you will need to force-push the branch since the commit history has
+ been rewritten:
```shell
git push --force-with-lease origin
@@ -486,12 +506,14 @@ troubleshooting steps you can take to resolve the most common issues.
7. #### Verify Everything
- After pushing, you can verify that everything is resolved and the history is clean:
+ After pushing, you can verify that everything is resolved and the history is
+ clean:
```shell
git log
```
- This will show the commit history, confirming that your conflicts were successfully resolved.
+ This will show the commit history, confirming that your conflicts were
+ successfully resolved.
diff --git a/src/content/docs/Products/SplashKit/04-pull-request.mdx b/src/content/docs/Products/SplashKit/04-pull-request.mdx
index 6a2ac5c24..0981fc084 100644
--- a/src/content/docs/Products/SplashKit/04-pull-request.mdx
+++ b/src/content/docs/Products/SplashKit/04-pull-request.mdx
@@ -1,6 +1,7 @@
---
title: How to Create a Pull Request
-description: This is a step-by-step guide on how to create a pull request for SplashKit.
+description:
+ This is a step-by-step guide on how to create a pull request for SplashKit.
sidebar:
label: "- Pull Request Guide"
order: 4
@@ -10,43 +11,48 @@ import { Steps, Aside } from "@astrojs/starlight/components";
## How to Create a Pull Request
-This guide provides a step-by-step process for creating a pull request (PR) in the SplashKit
-Starlight repository, PRs are the primary way to contribute changes to the project. By following
-these steps, you can submit your own PRs and collaborate with other team members effectively.
+This guide provides a step-by-step process for creating a pull request (PR) in
+the SplashKit Starlight repository. PRs are the primary way to contribute
+changes to the project. By following these steps, you can submit your own PRs
+and collaborate with other team members effectively.
1. ### Check for Upstream Branches
- Before creating a pull request, it's important to ensure that your local repository is connected
- to the correct upstream repository. The upstream repository is the original repository from which
- your fork was created. You need this connection to pull in the latest changes from the main
- project.
+ Before creating a pull request, it's important to ensure that your local
+ repository is connected to the correct upstream repository. The upstream
+ repository is the original repository from which your fork was created. You
+ need this connection to pull in the latest changes from the main project.
- To check if upstream branches are already linked to your local repository, run the following
- command:
+ To check if upstream branches are already linked to your local repository,
+ run the following command:
```shell
git remote -v
```
- This will display a list of remote repositories linked to your local repository. If the
- `upstream` branch is not listed, you will need to add it in the next step.
+ This will display a list of remote repositories linked to your local
+ repository. If the `upstream` branch is not listed, you will need to add it
+ in the next step.
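For reference, `git remote -v` output looks like the sketch below. This is a hypothetical scratch repository with a placeholder URL, not a real fork; here only `origin` is configured, which is exactly the case where you would add `upstream` in the next step:

```shell
# Hypothetical sketch; the URL is a placeholder, not a real fork.
rm -rf /tmp/remote-demo && mkdir /tmp/remote-demo && cd /tmp/remote-demo
git init --quiet
git remote add origin https://github.com/your-username/splashkit.io-starlight.git
git remote -v
# origin  https://github.com/your-username/splashkit.io-starlight.git (fetch)
# origin  https://github.com/your-username/splashkit.io-starlight.git (push)
```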
2. ### Add Upstream Branches (if not present)
- If the upstream branch is not already added, you can manually add it to your local repository.
- This ensures you can fetch and merge changes from the main repository whenever necessary.
+ If the upstream branch is not already added, you can manually add it to your
+ local repository. This ensures you can fetch and merge changes from the main
+ repository whenever necessary.
- To add the upstream branch, run the following command, replacing `` with the actual
- name of the repository you're working with (e.g., `splashkit.io-starlight`).
+ To add the upstream branch, run the following command, replacing
+ `<repository-name>` with the actual name of the repository you're working with
+ (e.g., `splashkit.io-starlight`).
```shell
git remote add upstream https://github.com/thoth-tech/<repository-name>.git
```
#### Examples
- - For the `splashkit.io-starlight` repository, the command will look like this:
+ - For the `splashkit.io-starlight` repository, the command will look like
+ this:
```shell
git remote add upstream https://github.com/thoth-tech/splashkit.io-starlight.git
@@ -60,8 +66,8 @@ these steps, you can submit your own PRs and collaborate with other team members
3. ### Verify Upstream Branches
- After adding the upstream branch, verify that it has been added correctly by running the
- following command again:
+ After adding the upstream branch, verify that it has been added correctly by
+ running the following command again:
```shell
git remote -v
@@ -76,15 +82,17 @@ these steps, you can submit your own PRs and collaborate with other team members
upstream https://github.com/thoth-tech/splashkit.io-starlight.git (push)
```
- If the upstream branch is correctly listed, you are now ready to create your pull request.
+ If the upstream branch is correctly listed, you are now ready to create your
+ pull request.
## Sync Your Fork (Optional but Recommended)
-Before creating a pull request, it's good practice to sync your local fork with the upstream
-repository to ensure you're working with the latest version. Run the following commands to fetch and
-merge the latest changes from the upstream repository:
+Before creating a pull request, it's good practice to sync your local fork with
+the upstream repository to ensure you're working with the latest version. Run
+the following commands to fetch and merge the latest changes from the upstream
+repository:
```shell
git fetch upstream
@@ -92,12 +100,13 @@ git checkout main
git merge upstream/main
```
-This ensures that your pull request will not conflict with the latest updates made by others.
+This ensures that your pull request will not conflict with the latest updates
+made by others.
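If you also keep a separate working branch, one common follow-up (a hedged sketch with hypothetical branch and file names, shown here in a throwaway repository) is to replay it on top of the freshly synced `main` with `git rebase`:

```shell
# Hypothetical demo in a scratch repository; "my-feature" and the
# file names are placeholders, not part of the real workflow.
rm -rf /tmp/sync-demo && mkdir /tmp/sync-demo && cd /tmp/sync-demo
git init --quiet
git config user.email demo@example.com
git config user.name "Demo"
git checkout -q -b main
echo a > a.txt && git add a.txt && git commit -qm "initial"
git checkout -q -b my-feature
echo b > b.txt && git add b.txt && git commit -qm "feature work"
git checkout -q main
echo c > c.txt && git add c.txt && git commit -qm "synced upstream change"
git checkout -q my-feature
git rebase main                   # replay feature commits onto updated main
git log --oneline                 # feature work now sits on top of main
```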
## Creating a Pull Request
-There are two primary ways to create a pull request: using the GitHub website or the GitHub Pull
-Requests extension in VSCode.
+There are two primary ways to create a pull request: using the GitHub website or
+the GitHub Pull Requests extension in VSCode.
### Using the GitHub Website
@@ -105,43 +114,47 @@ Requests extension in VSCode.
1. #### Open GitHub to Review the Pull Request
- Head to GitHub and navigate to your forked repository. Once there, click on the **Pull requests**
- tab at the top of the page, and then click the **New pull request** button.
+ Head to GitHub and navigate to your forked repository. Once there, click on
+ the **Pull requests** tab at the top of the page, and then click the **New
+ pull request** button.
2. #### Select the Correct Repository and Branches
Next, make sure you're comparing the correct branches:
- - **Base Repository**: This should be set to `thoth-tech/repo_name` (the
- repository you're contributing to).
+ - **Base Repository**: This should be set to `thoth-tech/repo_name` (the
+ original repository you're contributing to).
- **Base Branch**: Select `main` as the branch to merge into.
:::caution[Usage Examples Base Branch]
- For usage example tasks, the base branch (destination branch) needs to be changed to be
- **usage-examples**.
+ For usage example tasks, the base branch (destination branch) needs to be
+ changed to **usage-examples**.
:::
- The other dropdown should show your forked repository and the branch you want to merge from.
+ The other dropdown should show your forked repository and the branch you want
+ to merge from.
- Ensure these settings are correct to avoid submitting changes to the wrong branch or repository.
+ Ensure these settings are correct to avoid submitting changes to the wrong
+ branch or repository.
3. #### Review Your Changes
- GitHub will display a comparison of the changes between your branch and the `main` branch of the
- upstream repository. This is your opportunity to double-check the modifications you're proposing
- to merge.
+ GitHub will display a comparison of the changes between your branch and the
+ `main` branch of the upstream repository. This is your opportunity to
+ double-check the modifications you're proposing to merge.
Make sure everything looks correct before proceeding.
:::caution[Usage Example Files]
- Ensure you have the files listed below added, with their names updated to the python signature of
- the function you are demonstrating.
+ Ensure you have the files listed below added, with their names updated to the
+ Python signature of the function you are demonstrating.
- For **standard** Usage Example PRs:
```plaintext
@@ -157,22 +170,23 @@ Requests extension in VSCode.
4. #### Add Pull Request Details
- When you create a pull request, you'll need to provide some additional information using a pull
- request template. This helps reviewers understand the context of your changes. Make sure to:
+ When you create a pull request, you'll need to provide some additional
+ information using a pull request template. This helps reviewers understand
+ the context of your changes. Make sure to:
- Provide a clear and descriptive title for your pull request.
- - Fill out the required fields in the template, such as the purpose of the changes, testing
- steps, and any additional notes.
+ - Fill out the required fields in the template, such as the purpose of the
+ changes, testing steps, and any additional notes.
- Be as detailed as possible. This makes it easier for reviewers to understand your contribution
- and provide feedback.
+ Be as detailed as possible. This makes it easier for reviewers to understand
+ your contribution and provide feedback.
5. #### Submit the Pull Request
- Once you've filled out the template and confirmed your changes, click the **Create pull request**
- button. Your pull request will now be submitted and visible to the repository maintainers and
- reviewers for feedback.
+ Once you've filled out the template and confirmed your changes, click the
+ **Create pull request** button. Your pull request will now be submitted and
+ visible to the repository maintainers and reviewers for feedback.
@@ -180,22 +194,24 @@ Requests extension in VSCode.
Alternatively, you can use the
[GitHub Pull Requests extension](https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-pull-request-github)
-for VSCode. This allows you to create pull requests directly from your code editor.
+for VSCode. This allows you to create pull requests directly from your code
+editor.
1. #### Open the Extension
- In VSCode, click on the GitHub Pull Requests icon in the sidebar. If you don't see it, you can
- install it from the Extensions Marketplace.
+ In VSCode, click on the GitHub Pull Requests icon in the sidebar. If you
+ don't see it, you can install it from the Extensions Marketplace.
2. #### Create the Pull Request
- Click the **Create Pull Request** button, which will give you the option to select the branch you
- want to merge from (your working branch) and the branch you want to merge into (usually `main`).
+ Click the **Create Pull Request** button, which will give you the option to
+ select the branch you want to merge from (your working branch) and the branch
+ you want to merge into (usually `main`).
- Follow the same steps as on the GitHub website to review your changes, fill out the pull request
- template, and submit it.
+ Follow the same steps as on the GitHub website to review your changes, fill
+ out the pull request template, and submit it.
@@ -203,17 +219,19 @@ for VSCode. This allows you to create pull requests directly from your code edit
## Next Steps After Submitting a Pull Request
-Once your pull request is submitted, move the associated Planner card to the **First Peer Review**
-column in your project management tool, and share both the pull request and the Planner card with
-your team or peer reviewers. Follow the information on the
-[Planner Board Etiquette](/products/splashkit/07-planner-board) page to ensure a smooth review
-process.
+Once your pull request is submitted, move the associated Planner card to the
+**First Peer Review** column in your project management tool, and share both the
+pull request and the Planner card with your team or peer reviewers. Follow the
+information on the
+[Planner Board Etiquette](/products/splashkit/07-planner-board) page to ensure a
+smooth review process.
-Keep an eye out for feedback from the reviewer, and be prepared to make changes if necessary.
+Keep an eye out for feedback from the reviewer, and be prepared to make changes
+if necessary.
### Useful Links
-- [Pull Request Template](/products/splashkit/05-pull-request-template): The template for creating a
- pull request for SplashKit team.
-- [Peer Review Guide](/products/splashkit/06-peer-review): The guide on how to perform a peer review
- within the SplashKit team.
+- [Pull Request Template](/products/splashkit/05-pull-request-template): The
+ template for creating a pull request for the SplashKit team.
+- [Peer Review Guide](/products/splashkit/06-peer-review): The guide on how to
+ perform a peer review within the SplashKit team.
diff --git a/src/content/docs/Products/SplashKit/05-pull-request-template.mdx b/src/content/docs/Products/SplashKit/05-pull-request-template.mdx
index d3ba99b4e..0b2ffeacd 100644
--- a/src/content/docs/Products/SplashKit/05-pull-request-template.mdx
+++ b/src/content/docs/Products/SplashKit/05-pull-request-template.mdx
@@ -1,6 +1,7 @@
---
title: Pull Request Template
-description: This is a template for creating a pull request for SplashKit Website.
+description:
+ This is a template for creating a pull request for SplashKit Website.
sidebar:
label: "- Pull Request Template"
order: 5
@@ -8,15 +9,17 @@ sidebar:
import { Tabs, TabItem } from "@astrojs/starlight/components";
-Most SplashKit repos have a default pull request template that you can use. Usage Example PRs will
-need to use the template below.
+Most SplashKit repos have a default pull request template that you can use.
+Usage Example PRs will need to use the template below.
:::note
-The templates all include checklists of items that you need to complete before submitting your pull
-request, some of which may not be relevant to your specific pull request.
+The templates all include checklists of items that you need to complete before
+submitting your pull request, some of which may not be relevant to your specific
+pull request.
-Please ensure that you complete all the relevant items before submitting your pull request.
+Please ensure that you complete all the relevant items before submitting your
+pull request.
:::
@@ -26,7 +29,8 @@ See the templates in the tabs below:
-Remove the default template in the pull request, and instead, use the following template:
+Remove the default template in the pull request, and instead, use the following
+template:
{/* prettier-ignore-start */}
@@ -178,5 +182,7 @@ _Please describe the tests that you ran to verify your changes. Provide instruct
-Once submitted, move the associated planner card to peer review and link the pull request. Follow
-the [Planner Board Etiquette](/products/splashkit/07-planner-board) for more details on the process.
+Once submitted, move the associated planner card to peer review and link the
+pull request. Follow the
+[Planner Board Etiquette](/products/splashkit/07-planner-board) for more details
+on the process.
diff --git a/src/content/docs/Products/SplashKit/06-peer-review.mdx b/src/content/docs/Products/SplashKit/06-peer-review.mdx
index e6f17e382..d60d72b00 100644
--- a/src/content/docs/Products/SplashKit/06-peer-review.mdx
+++ b/src/content/docs/Products/SplashKit/06-peer-review.mdx
@@ -11,66 +11,73 @@ import { Tabs, TabItem, Steps, Aside } from "@astrojs/starlight/components";
-In SplashKit, peer reviews are an essential part of maintaining high-quality code. The Peer-Review
-Checklist provided below is required for every pull request and ensures that all contributions meet
-a consistent standard across the project. This checklist covers essential aspects like code quality,
+In SplashKit, peer reviews are an essential part of maintaining high-quality
+code. The Peer-Review Checklist provided below is required for every pull
+request and ensures that all contributions meet a consistent standard across the
+project. This checklist covers essential aspects like code quality,
functionality, and testing.
-However, we recognize that every feature or task is different, and it’s difficult to capture all
-potential review points in a single checklist. That’s why we’ve also included a set of Peer-Review
-Prompts. These prompts are not mandatory but serve as a resource to guide the peer-review
-discussion. Since peer reviews should always be collaborative, these prompts help ensure that the
-review process is conversational and thorough, encouraging reviewers to think critically and explore
-areas that may not be immediately obvious.
+However, we recognize that every feature or task is different, and it’s
+difficult to capture all potential review points in a single checklist. That’s
+why we’ve also included a set of Peer-Review Prompts. These prompts are not
+mandatory but serve as a resource to guide the peer-review discussion. Since
+peer reviews should always be collaborative, these prompts help ensure that the
+review process is conversational and thorough, encouraging reviewers to think
+critically and explore areas that may not be immediately obvious.
-Remember, the goal of peer reviews is not only to verify the quality of the code but also to foster
-a collaborative environment where we improve together.
+Remember, the goal of peer reviews is not only to verify the quality of the code
+but also to foster a collaborative environment where we improve together.
## How to Perform a Peer Review
-To maintain code quality and ensure smooth integration of new features, it’s essential to follow
-these steps when reviewing a PR in the SplashKit Starlight repository.
+To maintain code quality and ensure smooth integration of new features, it’s
+essential to follow these steps when reviewing a PR in the SplashKit Starlight
+repository.
1. ### Check for Upstream Branches
- Start by verifying whether the upstream branches are already added to your local repository. This
- is necessary to ensure that you can fetch PRs from the original repository for review.
+ Start by verifying whether the upstream branches are already added to your
+ local repository. This is necessary to ensure that you can fetch PRs from the
+ original repository for review.
```shell
git remote -v
```
- If the output does not show `upstream` linked to the main repository, you’ll need to add it in
- the next step.
+ If the output does not show `upstream` linked to the main repository, you’ll
+ need to add it in the next step.
2. ### Add Upstream Branches (if not present)
- If the upstream branch is missing, add it manually. Replace `` with the exact name of
- the repository.
+ If the upstream branch is missing, add it manually. Replace `<repository-name>`
+ with the exact name of the repository.
```shell
git remote add upstream https://github.com/thoth-tech/<repository-name>.git
@@ -92,13 +99,15 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
git remote -v
```
- You should see both `origin` (your fork) and `upstream` (the main project repository) listed.
+ You should see both `origin` (your fork) and `upstream` (the main project
+ repository) listed.
4. ### Pull the PR into a New Branch
- To review a PR, you will fetch it into a new local branch. Locate the ID/number of the PR on
- GitHub, and use this number in the following command. Replace `ID` with the PR number and
- `PR-branch-name` with a name that represents the PR purpose.
+ To review a PR, you will fetch it into a new local branch. Locate the
+ ID/number of the PR on GitHub, and use this number in the following command.
+ Replace `ID` with the PR number and `PR-branch-name` with a name that
+ represents the PR purpose.
```shell
git fetch upstream pull/ID/head:PR-branch-name
@@ -121,20 +130,23 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
6. ### Review the Code
Now that you are on the PR branch, start by reviewing the code to check for:
- - **Code Quality**: Confirm that the code aligns with the project’s coding standards and
- guidelines. Look for clean, well-organised, and readable code.
- - **Functionality**: Verify that the changes achieve the intended purpose and work as described.
- - **Testing**: Check for the presence of adequate tests, including unit and integration tests
- where necessary.
- - **Documentation**: Ensure any new features or updates are documented, with clear comments for
- any complex sections.
-
- Refer to the pull request template as you go through these checks to confirm that all required
- fields are covered.
+ - **Code Quality**: Confirm that the code aligns with the project’s coding
+ standards and guidelines. Look for clean, well-organised, and readable
+ code.
+ - **Functionality**: Verify that the changes achieve the intended purpose and
+ work as described.
+ - **Testing**: Check for the presence of adequate tests, including unit and
+ integration tests where necessary.
+ - **Documentation**: Ensure any new features or updates are documented, with
+ clear comments for any complex sections.
+
+ Refer to the pull request template as you go through these checks to confirm
+ that all required fields are covered.
### SplashKit Pull Request Templates
- Use this checklist as a reference to ensure you’re covering all necessary areas in your review.
+ Use this checklist as a reference to ensure you’re covering all necessary
+ areas in your review.
@@ -143,21 +155,23 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
```markdown
# Description
- Please include a summary of the changes and the related issue. Please also include relevant
- motivation and context. List any dependencies that are required for this change.
+ Please include a summary of the changes and the related issue. Please also
+ include relevant motivation and context. List any dependencies that are
+ required for this change.
## Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- - [ ] Breaking change (fix or feature that would cause existing functionality to not work as
- expected)
+ - [ ] Breaking change (fix or feature that would cause existing functionality
+ to not work as expected)
- [ ] Documentation (update or new)
## How Has This Been Tested?
- Please describe the tests that you ran to verify your changes. Provide instructions so we can
- reproduce. Please also list any relevant details for your test configuration.
+ Please describe the tests that you ran to verify your changes. Provide
+ instructions so we can reproduce. Please also list any relevant details for
+ your test configuration.
- [ ] Tested in latest Chrome
- [ ] Tested in latest Firefox
@@ -214,17 +228,18 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
## Code Quality
- - [ ] Repository: Is this Pull Request is made to the correct repository? (Thoth-Tech NOT
- SplashKit)
- - [ ] Readability: Is the code easy to read and follow? If not are there comments to help
- understand the code?
- - [ ] Maintainability: Can this code be easily maintained or extended in the future?
+ - [ ] Repository: Is this Pull Request made to the correct repository?
+ (Thoth-Tech NOT SplashKit)
+ - [ ] Readability: Is the code easy to read and follow? If not, are there
+ comments to help understand the code?
+ - [ ] Maintainability: Can this code be easily maintained or extended in the
+ future?
## Functionality
- [ ] Correctness: Does the code meet the requirements of the task?
- - [ ] Impact on Existing Functionality: Has the impact on existing functionality been considered
- and tested?
+ - [ ] Impact on Existing Functionality: Has the impact on existing
+ functionality been considered and tested?
## Testing
@@ -233,12 +248,14 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
## Documentation
- - [ ] Documentation: Are both inline and applicable external documentation updated and clear?
+ - [ ] Documentation: Are both inline and applicable external documentation
+ updated and clear?
## Pull Request Details
- [ ] PR Description: Is the problem being solved clearly described?
- - [ ] Checklist Completion: Have all relevant checklist items been reviewed and completed?
+ - [ ] Checklist Completion: Have all relevant checklist items been reviewed
+ and completed?
```
@@ -246,137 +263,161 @@ these steps when reviewing a PR in the SplashKit Starlight repository.
#### SplashKit Review Prompts
- - **Type of Change**: Does this Pull Request correctly identify the type of change (bug fix, new
- feature, breaking change, or documentation update)? Is it aligned with the stated issue or
- task?
-
- - **Code Readability**: Is the code structure clean and easy to follow? Could it benefit from
- clearer variable names, additional comments, or better organization? Would this code be
- understandable for a new developer joining the project?
-
- - **Maintainability**: How maintainable is the code? Is it modular and easy to extend in the
- future? Does it avoid creating technical debt? Is the codebase as simple as possible while
- still accomplishing the task?
-
- - **Code Simplicity**: Are there any overly complex or redundant sections in the code? Could they
- be refactored for better simplicity or clarity? Does the code follow established design
- patterns and best practices?
-
- - **Edge Cases**: Does the implementation consider potential edge cases? What could go wrong with
- this code in unusual or unexpected scenarios? Are there any cases that haven’t been fully
- addressed?
-
- - **Test Thoroughness**: Are all key scenarios (including edge cases and failure paths) covered
- by tests? Could additional tests help ensure the reliability of the code? Has the code been
- tested across different environments (e.g., multiple browsers or platforms)?
-
- - **Backward Compatibility**: Does this change break any existing functionality? If so, has
- backward compatibility been handled or documented appropriately? Are there any warnings or
- notes in the documentation regarding compatibility?
-
- - **Performance Considerations**: Could this code have a negative impact on performance? Have any
- performance concerns been documented and tested? Could the code be optimized for better
- efficiency without sacrificing readability?
-
- - **Security Concerns**: Could this change introduce security vulnerabilities, especially in
- terms of input validation or sensitive data handling? Have security best practices been
- followed? Does this code ensure proper user data handling?
-
- - **Dependencies**: Are the new dependencies truly necessary? Could they create conflicts or
- issues down the line, particularly during upgrades or with other libraries in the project? Is
- there a simpler way to achieve the same functionality without adding new dependencies?
-
- - **Documentation**: Is the documentation clear and complete for both internal developers and
- external users? Could a new developer understand how to use or modify this feature from the
- documentation provided? Does it cover any API or external interface changes?
+ - **Type of Change**: Does this Pull Request correctly identify the type of
+ change (bug fix, new feature, breaking change, or documentation update)? Is
+ it aligned with the stated issue or task?
+
+ - **Code Readability**: Is the code structure clean and easy to follow? Could
+ it benefit from clearer variable names, additional comments, or better
+ organization? Would this code be understandable for a new developer joining
+ the project?
+
+ - **Maintainability**: How maintainable is the code? Is it modular and easy
+ to extend in the future? Does it avoid creating technical debt? Is the
+ codebase as simple as possible while still accomplishing the task?
+
+ - **Code Simplicity**: Are there any overly complex or redundant sections in
+ the code? Could they be refactored for better simplicity or clarity? Does
+ the code follow established design patterns and best practices?
+
+ - **Edge Cases**: Does the implementation consider potential edge cases? What
+ could go wrong with this code in unusual or unexpected scenarios? Are there
+ any cases that haven’t been fully addressed?
+
+ - **Test Thoroughness**: Are all key scenarios (including edge cases and
+ failure paths) covered by tests? Could additional tests help ensure the
+ reliability of the code? Has the code been tested across different
+ environments (e.g., multiple browsers or platforms)?
+
+ - **Backward Compatibility**: Does this change break any existing
+ functionality? If so, has backward compatibility been handled or documented
+ appropriately? Are there any warnings or notes in the documentation
+ regarding compatibility?
+
+ - **Performance Considerations**: Could this code have a negative impact on
+ performance? Have any performance concerns been documented and tested?
+ Could the code be optimized for better efficiency without sacrificing
+ readability?
+
+ - **Security Concerns**: Could this change introduce security
+ vulnerabilities, especially in terms of input validation or sensitive data
+ handling? Have security best practices been followed? Does this code ensure
+ proper user data handling?
+
+ - **Dependencies**: Are the new dependencies truly necessary? Could they
+ create conflicts or issues down the line, particularly during upgrades or
+ with other libraries in the project? Is there a simpler way to achieve the
+ same functionality without adding new dependencies?
+
+ - **Documentation**: Is the documentation clear and complete for both
+ internal developers and external users? Could a new developer understand
+ how to use or modify this feature from the documentation provided? Does it
+ cover any API or external interface changes?
7. ### Test the Changes Locally
- After the code review, run the project locally to verify that the new feature or bug fix works as
- expected. This can include:
+ After the code review, run the project locally to verify that the new feature
+ or bug fix works as expected. This can include:
- Running any test suites that come with the project.
- - Manually checking if the new functionality behaves correctly and does not introduce any bugs.
+ - Manually checking if the new functionality behaves correctly and does not
+ introduce any bugs.
- Ensuring the changes do not break other parts of the project.
8. ### Provide Constructive Feedback
- After reviewing and testing, leave constructive feedback directly on the PR on GitHub. Highlight
- both positive aspects and areas for improvement.
+ After reviewing and testing, leave constructive feedback directly on the PR
+ on GitHub. Highlight both positive aspects and areas for improvement.
- Use specific comments on code lines or sections where changes are required.
- - Make sure to explain why a change is needed to help the author learn and understand.
- - Be courteous and professional, focusing on improving the code and maintaining high project
- standards.
+ - Make sure to explain why a change is needed to help the author learn and
+ understand.
+ - Be courteous and professional, focusing on improving the code and
+ maintaining high project standards.
9. ### Approve or Request Changes
Once you’ve completed your review:
- - **Approve** if everything meets the project’s standards and the code works as expected.
- - **Request Changes** if the code requires adjustments before it can be merged. Clearly outline
- the changes required.
+ - **Approve** if everything meets the project’s standards and the code works
+ as expected.
+ - **Request Changes** if the code requires adjustments before it can be
+ merged. Clearly outline the changes required.
- In both cases, document your decision and leave detailed notes to assist the author.
+ In both cases, document your decision and leave detailed notes to assist the
+ author.
10. ### Update Planner Board Status
- Following the [Planner Board Etiquette](/products/splashkit/07-planner-board), move the
- associated Planner card to the next column based on the review outcome. If the PR is approved,
- update the card’s status accordingly, and if you requested changes, mark it for revision.
+ Following the
+ [Planner Board Etiquette](/products/splashkit/07-planner-board), move the
+ associated Planner card to the next column based on the review outcome. If
+ the PR is approved, update the card’s status accordingly, and if you
+ requested changes, mark it for revision.
- By following this guide, you’ll ensure a thorough and professional review process, helping
- maintain the quality and reliability of the SplashKit Starlight project.
+ By following this guide, you’ll ensure a thorough and professional review
+ process, helping maintain the quality and reliability of the SplashKit
+ Starlight project.
### Review Guidelines for Specific File Types
-Different file types require different levels of attention during the review process. Here's what to
-look for when reviewing each type of file:
+Different file types require different levels of attention during the review
+process. Here's what to look for when reviewing each type of file:
#### `.mdx` Files
-- **Content Accuracy**: Ensure that the content is clear and accurate. Double-check for any errors
- in the documentation or guides.
-- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is correctly filled out.
-- **Component Usage**: Verify that components such as `LinkCard`, `CardGrid`, or others are being
- used appropriately within the `.mdx` files.
+- **Content Accuracy**: Ensure that the content is clear and accurate.
+ Double-check for any errors in the documentation or guides.
+- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is
+ correctly filled out.
+- **Component Usage**: Verify that components such as `LinkCard`, `CardGrid`, or
+ others are being used appropriately within the `.mdx` files.
#### `.css` Files
-- **Consistency**: Check that the styles align with the **Styling Guide** and maintain a consistent
- use of variables (e.g., colours, fonts, spacing).
-- **Accessibility**: Review for accessibility considerations, such as whether animations are
- disabled for users who prefer reduced motion, and whether contrast ratios meet **WCAG 2.1 AA**
- standards.
-- **Naming Conventions**: Ensure that CSS class names follow a consistent naming pattern.
+- **Consistency**: Check that the styles align with the **Styling Guide** and
+ maintain a consistent use of variables (e.g., colours, fonts, spacing).
+- **Accessibility**: Review for accessibility considerations, such as whether
+ animations are disabled for users who prefer reduced motion, and whether
+ contrast ratios meet **WCAG 2.1 AA** standards.
+- **Naming Conventions**: Ensure that CSS class names follow a consistent naming
+ pattern.
#### `.jsx`/`.tsx` Files
-- **Functionality**: Make sure the interactive components (e.g., sliders, forms) work as expected
- and meet the requirements of the task.
-- **Performance**: Look for unnecessary re-renders or other performance concerns.
-- **Code Style**: Ensure the code follows **React/JSX** best practices and any project-specific
- linting rules.
+- **Functionality**: Make sure the interactive components (e.g., sliders, forms)
+ work as expected and meet the requirements of the task.
+- **Performance**: Look for unnecessary re-renders or other performance
+ concerns.
+- **Code Style**: Ensure the code follows **React/JSX** best practices and any
+ project-specific linting rules.
#### `.astro` Files
-- **Structure**: Ensure the page or component is well-structured and follows the **Astro standards**
- for component and page creation.
-- **Reusability**: Look for opportunities to refactor repetitive code into reusable components.
+- **Structure**: Ensure the page or component is well-structured and follows the
+ **Astro standards** for component and page creation.
+- **Reusability**: Look for opportunities to refactor repetitive code into
+ reusable components.
---
## Useful Resources for Reviewers
-- **Starlight Documentation**: [Starlight Docs](https://starlight.astro.build/getting-started/)
-- **Astro Documentation**: [Astro Docs](https://docs.astro.build/en/getting-started/)
-- **WCAG 2.1 AA Guidelines**: [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
-- **MDN CSS Documentation**: [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
-- **React Documentation**: [React Official Docs](https://reactjs.org/docs/getting-started.html)
+- **Starlight Documentation**:
+ [Starlight Docs](https://starlight.astro.build/getting-started/)
+- **Astro Documentation**:
+ [Astro Docs](https://docs.astro.build/en/getting-started/)
+- **WCAG 2.1 AA Guidelines**:
+ [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
+- **MDN CSS Documentation**:
+ [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
+- **React Documentation**:
+ [React Official Docs](https://reactjs.org/docs/getting-started.html)
- **Usage Example Styling Guide**:
[Style Guide](/products/splashkit/documentation/splashkit-website/usage-examples/05-usage-example-style-guide)
---
-By following these guidelines, you'll ensure that the SplashKit website project maintains high
-standards of code quality, performance, and accessibility. Remember, peer reviews are not only about
-verifying the code but also about learning and improving together as a team.
+By following these guidelines, you'll ensure that the SplashKit website project
+maintains high standards of code quality, performance, and accessibility.
+Remember, peer reviews are not only about verifying the code but also about
+learning and improving together as a team.
diff --git a/src/content/docs/Products/SplashKit/07-planner-board.mdx b/src/content/docs/Products/SplashKit/07-planner-board.mdx
index 2b1954b9a..0d8dd510d 100644
--- a/src/content/docs/Products/SplashKit/07-planner-board.mdx
+++ b/src/content/docs/Products/SplashKit/07-planner-board.mdx
@@ -10,60 +10,64 @@ import { Steps } from "@astrojs/starlight/components";
## Proper Planner Board Etiquette
-The planner board is where all tasks are tracked. You can find tasks to claim and work on, or add
-your own tasks that you will complete. Here are some guidelines to ensure smooth teamwork and
-efficient use of the planner board.
+The planner board is where all tasks are tracked. You can find tasks to claim
+and work on, or add your own tasks that you will complete. Here are some
+guidelines to ensure smooth teamwork and efficient use of the planner board.
1. ### Claiming a Task
- **Commit to work:** Only claim a task if you are ready to work on it.
- - **Unclaim if needed:** If you are unable to proceed with a task you've claimed, unclaim it so
- others can take over.
- - **Update status:** Once you claim a task, move it to the "Doing" column to signal that it's
- being actively worked on.
+ - **Unclaim if needed:** If you are unable to proceed with a task you've
+ claimed, unclaim it so others can take over.
+ - **Update status:** Once you claim a task, move it to the "Doing" column to
+ signal that it's being actively worked on.
2. ### Adding a Task
- - **Be clear and concise:** When adding a task, provide a meaningful title and a detailed
- description.
- - **Add checklists:** If the task involves multiple steps, include a checklist to outline them
- clearly.
- - **Use appropriate tags:** Tag the task with relevant labels to categorise it properly, such as
- `Tutorials` if it's tutorial based, or `usage examples` if it's a usage example.
+ - **Be clear and concise:** When adding a task, provide a meaningful title
+ and a detailed description.
+ - **Add checklists:** If the task involves multiple steps, include a
+ checklist to outline them clearly.
+ - **Use appropriate tags:** Tag the task with relevant labels to categorise
+ it properly, such as `Tutorials` if it's tutorial-based, or
+ `usage examples` if it's a usage example.
3. ### Moving Tasks
- - **Include relevant links:** When completing a task, attach links to the pull request (PR) and
- any other relevant information.
- - **Add a completion comment:** Leave a comment on the task card with the date you completed the
- task.
- - **Move to Peer Review:** After completing a task, move it to the "First Peer Review" column so
- a team member can review it.
+ - **Include relevant links:** When completing a task, attach links to the
+ pull request (PR) and any other relevant information.
+ - **Add a completion comment:** Leave a comment on the task card with the
+ date you completed the task.
+ - **Move to Peer Review:** After completing a task, move it to the "First
+ Peer Review" column so a team member can review it.
> **Need help with pull requests?**
- > Follow the [How to Create a Pull Request](/products/splashkit/04-pull-request) guide for
- > detailed instructions.
+ > Follow the
+ > [How to Create a Pull Request](/products/splashkit/04-pull-request) guide
+ > for detailed instructions.
4. ### First Peer Review
- **Follow the review process:** Adhere to the steps outlined in the
[Peer Review Guide](/products/splashkit/06-peer-review).
- - **Request changes if needed:** Provide feedback and request changes if required.
- - **Approval:** Once the task meets the standards, approve it and the PR, then move the task to
- the "Second Peer Review" column.
- - **Leave a comment:** Add a comment with the date and confirmation that you've approved the
- task.
+ - **Request changes if needed:** Provide feedback and request changes if
+ required.
+ - **Approval:** Once the task meets the standards, approve it and the PR,
+ then move the task to the "Second Peer Review" column.
+ - **Leave a comment:** Add a comment with the date and confirmation that
+ you've approved the task.
5. ### Second Peer Review
- - **Follow similar steps:** Conduct the second peer review following the same guidelines as the
- first.
- - **Mentor Review:** After approving the PR, move it to the appropriate "Mentor Review" column.
- - **Comment on approval:** As before, leave a comment with the date and a note indicating you've
- approved the task for mentor review.
+ - **Follow similar steps:** Conduct the second peer review following the same
+ guidelines as the first.
+ - **Mentor Review:** After approving the PR, move it to the appropriate
+ "Mentor Review" column.
+ - **Comment on approval:** As before, leave a comment with the date and a
+ note indicating you've approved the task for mentor review.
6. ### Mentor Review
- **Final review:** The mentor will review the task and provide feedback.
- - **Request changes:** If changes are needed, the mentor will request them and move the task back
- to the "doing" column.
- - **Approval:** Once the mentor approves the task, they will merge the PR and move the task to
- the "completed" column.
+ - **Request changes:** If changes are needed, the mentor will request them
+ and move the task back to the "Doing" column.
+ - **Approval:** Once the mentor approves the task, they will merge the PR and
+ move the task to the "Completed" column.
diff --git a/src/content/docs/Products/SplashKit/08-tips-and-tricks.mdx b/src/content/docs/Products/SplashKit/08-tips-and-tricks.mdx
index bfcc3e3b8..3716877de 100644
--- a/src/content/docs/Products/SplashKit/08-tips-and-tricks.mdx
+++ b/src/content/docs/Products/SplashKit/08-tips-and-tricks.mdx
@@ -1,6 +1,8 @@
---
title: Tips and Tricks
-description: Some tips and tricks that can improve user experience when contributing to SplashKit
+description:
+ Some tips and tricks that can improve user experience when contributing to
+ SplashKit
sidebar:
label: "- Tips and Tricks"
order: 8
@@ -10,38 +12,43 @@ import { Steps } from "@astrojs/starlight/components";
## General tips and tricks to make your life easier
-This guide will provide some useful tips and tricks that have been useful for me while working on
-SplashKit.
+This guide provides some tips and tricks that have been useful for me while
+working on SplashKit.
## Setting Git Bash terminal as default in VS Code
-By setting the terminal to Git Bash in VS Code you can compile and run your code without having to
-open a new terminal and changing directory to your working directory.
+By setting the terminal to Git Bash in VS Code, you can compile and run your
+code without having to open a new terminal and change directory to your working
+directory.
To do this:
1. Open VS Code
-2. Open a new terminal by pressing `` Ctrl + Shift + ` `` or Terminal tab -> new terminal
- 
+2. Open a new terminal by pressing `` Ctrl + Shift + ` `` or Terminal tab -> new
+ terminal 
3. In the terminal, locate the small drop down arrow to the top right

-4. In the drop down menu click "Select Default Profile" 
-5. You should see a drop down menu at the top of the window with a list of terminals where you can
- select Git Bash or your terminal of choice. 
-6. Select the terminal you would like as the default terminal within VS Code. I recommend Git Bash
+4. In the drop down menu click "Select Default Profile"
+ 
+5. You should see a drop down menu at the top of the window with a list of
+ terminals where you can select Git Bash or your terminal of choice.
+ 
+6. Select the terminal you would like as the default terminal within VS Code. I
+ recommend Git Bash.
-By following these steps, VS Code will automatically open an embedded terminal in your working
-directory. No more alt-tabbing when building your SplashKit projects.
+By following these steps, VS Code will automatically open an embedded terminal
+in your working directory. No more alt-tabbing when building your SplashKit
+projects.
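If you prefer editing settings directly, the same default can be set in VS Code's `settings.json`. This is a minimal sketch, assuming the profile is registered under the name "Git Bash" (the usual name added by Git for Windows — check the profile list on your machine):

```jsonc
{
  // Make Git Bash the default integrated terminal on Windows
  "terminal.integrated.defaultProfile.windows": "Git Bash"
}
```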
## Setting up aliases in your Bash terminal
-Constantly typing the same lengthy commands can become tedious so by setting up aliases you can save
-yourself a bunch of time. Whether it is shortened git commands or a compilation command for C++
-aliases are a great time saver.
+Constantly typing the same lengthy commands can become tedious, so setting up
+aliases can save you a bunch of time. Whether it is shortened git commands or a
+compilation command for C++, aliases are a great time saver.
To create Aliases:
@@ -52,15 +59,17 @@ To create Aliases:
3. In the text editor create the alias you want to use as such:
- `alias name='command'`
-4. Once you have added the aliases you wish to use, to exit the nano editor and save changes press
- `Ctrl + X`
-5. When prompted to save changes press `Y` and then `enter`. This should close the editor and take
- you back to the terminal
-6. Now to apply the changes enter the command `source ~/.bashrc` or restart your terminal
+4. Once you have added the aliases you wish to use, to exit the nano editor and
+ save changes press `Ctrl + X`
+5. When prompted to save changes press `Y` and then `enter`. This should close
+ the editor and take you back to the terminal
+6. Now, to apply the changes, enter the command `source ~/.bashrc` or restart
+ your terminal.
-Now when you enter your alias it should run the command you have set. Some Aliases I use are:
+Now when you enter your alias, it should run the command you have set. Some
+aliases I use are:
| Aliases | Purpose |
| ------------------------------------------------- | --------------------------------------------------------------- |
@@ -68,12 +77,14 @@ Now when you enter your alias it should run the command you have set. Some Alias
| `alias gcm='git checkout main'` | quickly get to main branch |
| `alias skcompile='skm clang++ *.cpp -o a && ./a'` | quickly compiles and runs C++ SplashKit projects in one command |
-These are some example aliases but you can do pretty much anything. If there are any commands you
-find yourself typing and think "Man this is tedious" just make a new alias for it!
+These are some example aliases, but you can do pretty much anything. If there
+are any commands you find yourself typing and think "Man, this is tedious",
+just make a new alias for it!
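As a sketch, the aliases from the table above would look like this inside `~/.bashrc` (the `shopt` line is only needed when exercising aliases from a non-interactive script; interactive shells expand aliases by default):

```shell
# Example aliases to append to ~/.bashrc
shopt -s expand_aliases        # allow alias expansion in non-interactive scripts

alias gs='git status'          # shortened git status
alias gcm='git checkout main'  # quickly get to the main branch
# compile every .cpp file in the folder with SplashKit and run the result
alias skcompile='skm clang++ *.cpp -o a && ./a'
```

After saving, run `source ~/.bashrc` (or open a new terminal), and typing `gs` will run `git status`.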
## Useful VS Code Extensions
+These are some helpful extensions that you can add to VS Code that may help
+with your workflow.
| Extension | Description |
| ------------------- | --------------------------------------------------------------------------- |
@@ -86,8 +97,8 @@ These are some helpful extensions that you can add to VS Code that may help with
## Enable format on save
-Enabling format on save will allow your formatting extensions like Intellisense or prettier to
-format your code automatically when you save the document.
+Enabling format on save will allow formatting extensions like IntelliSense or
+Prettier to format your code automatically when you save the document.
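Under the hood, this toggles a single VS Code setting; a minimal `settings.json` sketch is below (the Prettier extension ID is only an assumption, for users of the common `esbenp.prettier-vscode` extension):

```jsonc
{
  // Run the active formatter every time a file is saved
  "editor.formatOnSave": true,
  // Optional: pick Prettier as the formatter to run
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```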
To enable this:
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/car-race-clipping-bug.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/car-race-clipping-bug.md
index 7585542be..b4acb6d95 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/car-race-clipping-bug.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/car-race-clipping-bug.md
@@ -4,9 +4,9 @@ title: Car Race Clipping with Non-Player Cars
## Bug Description
-While palying the game, multiple (non- player) cars can spawn in the same lane, and bnecomes
-noticeable when the (non-player) cars have different movement speeds. The faster (non-player) car
-will then phase through the slower car.
+While playing the game, multiple (non-player) cars can spawn in the same lane,
+which becomes noticeable when the (non-player) cars have different movement
+speeds. The faster (non-player) car will then phase through the slower car.
## Testing Environment
@@ -14,8 +14,8 @@ This bug was found while on a windows 10 laptop.
## Reproduction
-Play Car Race untile you identify an instance of multiple non player cars sharing the same lane,
-with one phasing through the other.
+Play Car Race until you identify an instance of multiple non-player cars
+sharing the same lane, with one phasing through the other.
## Expected Results
@@ -23,4 +23,5 @@ Non-Player cars do not phase through or collide with eachother.
## Actual Results
-Non-Player cars can phase through eachother if multiple cars spawn in the same lane.
+Non-player cars can phase through each other if multiple cars spawn in the same
+lane.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/dxball-game-controls-bug.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/dxball-game-controls-bug.md
index c32c1833c..5e2f440a3 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/dxball-game-controls-bug.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/dxball-game-controls-bug.md
@@ -4,10 +4,11 @@ title: DXBallGame Unable to Interact With User Interface Bug
## Bug Description
-While attempting to play DXBallGame, the user inteface does not respond to any key presses. The game
-does react to keys 1, 8, 9, and 0, though these are likely intended to be used only for testing and
-debugging purposes, as they move/skip the user to specific screens, which are still static screens,
-as the user interface does not respond to any other key presses.
+While attempting to play DXBallGame, the user interface does not respond to any
+key presses. The game does react to keys 1, 8, 9, and 0, though these are likely
+intended only for testing and debugging purposes, as they move/skip the user to
+specific screens. Those screens are still static, as the user interface does
+not respond to any other key presses.
## Testing Environment
@@ -15,13 +16,15 @@ This bug was found while on a windows 10 laptop.
## Reproduction
-Build and attempt to play the DXBallGame. The user interface will not respond to key presses.
+Build and attempt to play the DXBallGame. The user interface will not respond to
+key presses.
## Expected Results
-The user is able to properly inteact with the user interface through the use of their controls.
+The user is able to properly interact with the user interface through the use
+of their controls.
## Actual Results
-The user is unable to interact with te user interface, as the game does not react to the key presses
-of the user.
+The user is unable to interact with the user interface, as the game does not
+react to the user's key presses.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-paddle-collisions-bug.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-paddle-collisions-bug.md
index 70ab543cc..6345700b5 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-paddle-collisions-bug.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-paddle-collisions-bug.md
@@ -4,11 +4,12 @@ title: Pingpong Problematic Paddle and Puck Collisions Bug
## Bug Description
-The collision between the player paddles and the puck do not always work as intended and will
-sometimes allow the puck to phase through the players paddle. This is most common when the puck
-impacts the top or bottom of the players paddle, as the puck will phase/move through the entire
-length of the paddle as if it were a pipe. The puck can also phase through the paddles when
-impacting the side of the paddle, though i have not been able to identify any trend.
+The collision between the player paddles and the puck does not always work as
+intended and will sometimes allow the puck to phase through the player's paddle.
+This is most common when the puck impacts the top or bottom of the player's
+paddle, as the puck will phase/move through the entire length of the paddle as
+if it were a pipe. The puck can also phase through the paddles when impacting
+the side of the paddle, though I have not been able to identify any trend.
## Testing Environment
@@ -16,16 +17,17 @@ This bug was found while on a windows 10 laptop.
## Reproduction
-Play a pingpong game and attempt to have the puck impact to top or bottom edge of the paddle. If the
-collision is right, the puck should phase through the height of the paddle. It is also possible,
-though much less common, for the puck to impact the side of the paddle and still phase through.
+Play a pingpong game and attempt to have the puck impact the top or bottom edge
+of the paddle. If the collision is right, the puck should phase through the
+height of the paddle. It is also possible, though much less common, for the puck
+to impact the side of the paddle and still phase through.
## Expected Results
-The paddle acts as a wall that will redirect the puck such that the puck will bounce off the paddle
-andtravel back in the direction it came from.
+The paddle acts as a wall that will redirect the puck such that the puck will
+bounce off the paddle and travel back in the direction it came from.
## Actual Results
-The puck will intermittently phase through the paddle, while playing the collision sound on repeat
-while the puck is still inside the paddle.
+The puck will intermittently phase through the paddle, playing the collision
+sound on repeat while the puck is still inside the paddle.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-playspace-collision-bug.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-playspace-collision-bug.md
index 6f04d90c7..b50dedc49 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-playspace-collision-bug.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Bugs/pingpong-playspace-collision-bug.md
@@ -4,9 +4,10 @@ title: Pingpong Incorrect Play Area Boundary Bug
## Bug Description
-The collision boundary of the play area does not match the visal boundary for the play area. Player
-1 is unable to access the last centimeter or so of their (visual) play space closest to their goal,
-while player 2 is able to access the last centimeter of their play space closest to their goal.
+The collision boundary of the play area does not match the visual boundary of
+the play area. Player 1 is unable to access the last centimeter or so of their
+(visual) play space closest to their goal, while player 2 is able to access the
+last centimeter of their play space closest to their goal.
## Testing Environment
@@ -14,13 +15,16 @@ This bug was found while on a windows 10 laptop.
## Reproduction
-Play a pingpong game and have player 1 move towards their goal, as far as the game allows. Have
-player 2 do the same and compare their visual distance to their goals.
+Play a pingpong game and have player 1 move towards their goal, as far as the
+game allows. Have player 2 do the same and compare their visual distance to
+their goals.
## Expected Results
-Both players are able to move within the last centimeter of play space closest to their goals.
+Both players are able to move within the last centimeter of play space closest
+to their goals.
## Actual Results
-Only player two is able to move within the last centimeter of play space closest to their goal.
+Only player 2 is able to move within the last centimeter of play space closest
+to their goal.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/below-the-surface-enemy-recolour.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/below-the-surface-enemy-recolour.md
index b43bac91e..5059b9cd2 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/below-the-surface-enemy-recolour.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/below-the-surface-enemy-recolour.md
@@ -4,18 +4,20 @@ title: Below the Surface Enemy Colours
## Improvement Suggestion Description
-The colour scheme of the cockroach enemy should be changed to a brighter colour. The colour scheme
-of the giant rat final boss should also be reconsidered.
+The colour scheme of the cockroach enemy should be changed to a brighter colour.
+The colour scheme of the giant rat final boss should also be reconsidered.
## Reasoning
-Currently, the cockroach emeny has a dark brown colour scheme, which makes it difficult to identify
-because the background is a combination of a dark grey bottom half and a somewhat lighter shad of
-grey for the top half. The dark brown colour of the cockroach has very little contrast with the dark
-grey of the background and is thus extremely difficult to spot.
+Currently, the cockroach enemy has a dark brown colour scheme, which makes it
+difficult to identify because the background is a combination of a dark grey
+bottom half and a somewhat lighter shade of grey for the top half. The dark
+brown colour of the cockroach has very little contrast with the dark grey of
+the background and is thus extremely difficult to spot.
-While the lighter grey of the top half of the background does alleviate this issue slightly, the
-cockroach is still very difficult to identify.
+While the lighter grey of the top half of the background does alleviate this
+issue slightly, the cockroach is still very difficult to identify.
-The rat final boss suffers from a similar issue, as the vast majority of the boss is a dark brown,
-though the teeth, tip of the tail, and hands/feet are a different, more easily identifed colour.
+The rat final boss suffers from a similar issue, as the vast majority of the
+boss is a dark brown, though the teeth, tip of the tail, and hands/feet are a
+different, more easily identified colour.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/runner-dash-enemy-movement-changes.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/runner-dash-enemy-movement-changes.md
index 5e96229e3..4df9d3922 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/runner-dash-enemy-movement-changes.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/runner-dash-enemy-movement-changes.md
@@ -4,21 +4,23 @@ title: Runner Dash Enemy Movement Changes
## Improvement Suggestion Description
-There should be a set of options regarding enemy movement, to allow for difficulty adjustment and
-player choice. These options should include whether the enemy has 8 movement directions (north,
-south east, west, and diagonals) or 4 (north, south east, and west), whether the enemy can skip its
-turn, how the game decides when the enemy will skip its turn (random chance vs predictable pattern),
-and enemy count.
+There should be a set of options regarding enemy movement, to allow for
+difficulty adjustment and player choice. These options should include whether
+the enemy has 8 movement directions (north, south, east, west, and diagonals)
+or 4 (north, south, east, and west), whether the enemy can skip its turn, how
+the game decides when the enemy will skip its turn (random chance vs
+predictable pattern), and enemy count.
## Reasoning
-Currently, the enemy will always outmaneuver the player due to it having 8 movement directions
-(north, south east, west, and diagonals), while the player only has 4 (north, south east, and west).
-This means that the only way to survive is to flee the enemy in a straight line until the enemy
-skips a turn.
+Currently, the enemy will always outmaneuver the player due to it having 8
+movement directions (north, south, east, west, and diagonals), while the player
+only has 4 (north, south, east, and west). This means that the only way to
+survive is to flee from the enemy in a straight line until the enemy skips a
+turn.
-Only when the enemy skips a turn, can the player stop to plan their apprach for collecting gems.
+Only when the enemy skips a turn can the player stop to plan their approach for
+collecting gems.
-Reducing the enemy movement directions to match the players options can be a fix, but must be
-implemened alongside other changes, because this change alone would mean that the enemy can nver
-catch up to the player.
+Reducing the enemy's movement directions to match the player's options could be
+a fix, but it must be implemented alongside other changes, because this change
+alone would mean that the enemy can never catch up to the player.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/venture-adventure-restart-level-option.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/venture-adventure-restart-level-option.md
index 1d2e2f033..b4eb0a907 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/venture-adventure-restart-level-option.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/Improvement Suggestions/venture-adventure-restart-level-option.md
@@ -8,12 +8,13 @@ Implement an option for the player to restart or exit the level.
## Reasoning
-There should be an option of the player to restart or exit the level, as it is currently possible
-for the player to move the boxes to locations that render it impossible for the player to collect
-one or more gems.
+There should be an option for the player to restart or exit the level, as it is
+currently possible for the player to move the boxes to locations that render it
+impossible for the player to collect one or more gems.
-Since you must collect all gems to move onto the next level, the player is then unable to progress
-to the next level.
+Since you must collect all gems to move onto the next level, the player is then
+unable to progress to the next level.
-Currently, the only way to restart or exit the level while being in a situation where you are unable
-to collect one or more gems is to exit out of the game entirely and reopen the program.
+Currently, the only way to restart or exit the level while being in a situation
+where you are unable to collect one or more gems is to exit out of the game
+entirely and reopen the program.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/arcade-game-bug-testing-spike-plan.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/arcade-game-bug-testing-spike-plan.md
index db37513e1..6f1e9ea67 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/arcade-game-bug-testing-spike-plan.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/Bugs and Improvements/arcade-game-bug-testing-spike-plan.md
@@ -4,14 +4,16 @@ title: Arcade Game Bug Testing Spike Plan
## Context
-The team needs information regarding the issues and areas for improvement regarding the arcade
-games, as there is limited information regarding bugs and flaws in the various arcade games.
+The team needs information about the issues and areas for improvement in the
+arcade games, as there is currently little documentation of bugs and flaws in
+the various arcade games.
## Goals/Deliverables
The goals/deliverables are as follows
-1. Play the various arcade games and identify any bugs and flaws that should be improve on.
+1. Play the various arcade games and identify any bugs and flaws that should be
+   improved on.
2. Write out the various bugs and improvements in written documents.
Planned Start Date 20/11/2023
@@ -21,5 +23,6 @@ Deadline 24/11/2023
## Planning Notes
1. Build and play the arcade games, one at a time.
-2. When a bug or issue is encountered, attempt to gather information regarding the bug/issue.
+2. When a bug or issue is encountered, attempt to gather information regarding
+ the bug/issue.
3. Write down the details regarding the bugs/issues in separate documents.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/games-contribution-guide.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/games-contribution-guide.md
index 942c2bd3d..4efabdbbc 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/games-contribution-guide.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Games/games-contribution-guide.md
@@ -2,8 +2,8 @@
title: Guide to Contribute a Game to the Arcade Machine
---
-This guide takes you through the steps required for your game to be added into the arcade-machine
-library.
+This guide takes you through the steps required for your game to be added to
+the arcade-machine library.
- [Coding](#coding)
- [Quit Request](#quit-request)
@@ -18,13 +18,13 @@ library.
## Coding
-To make the game accessible and controllable by the arcade machine, some additional code or changes
-are required.
+To make the game accessible and controllable by the arcade machine, some
+additional code or changes are required.
### Quit Request
-Your game must be able to be exited using the escape key. This can be achieved by including the
-following command in your main loop.
+It must be possible to exit your game using the Escape key. This can be
+achieved by including the following code in your main loop.
```cpp
int main()
@@ -38,19 +38,20 @@ int main()
### Window Size
-The window size of your game cannot exceed 1600 x 900, this is to allow your game to sit neatly
-inside the arcade-machine itself. Similarly, there is a minimum window size of 640 x 480, to ensure
-visibility for the user.
+The window size of your game cannot exceed 1600 x 900; this allows your game to
+sit neatly inside the arcade machine itself. Similarly, there is a minimum
+window size of 640 x 480, to ensure visibility for the user.
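These size limits can be checked with a small helper script before submission. This is just a sketch; `window_size_ok` is an illustrative name, not part of SplashKit or any arcade tooling:

```shell
#!/bin/bash
# Sketch: check a proposed window size against the arcade machine's limits
# (maximum 1600 x 900, minimum 640 x 480). The function name is illustrative.
window_size_ok() {
  local w=$1 h=$2
  [ "$w" -ge 640 ] && [ "$w" -le 1600 ] && [ "$h" -ge 480 ] && [ "$h" -le 900 ]
}

window_size_ok 1280 720 && echo "1280x720 OK"
window_size_ok 1920 1080 || echo "1920x1080 out of range"
```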
-The window size of your game cannot exceed 1600 x 900, this is to allow your game to sit neatly
-inside the arcade-machine itself. Similarly, but mainly for aesthetic purposes, a minimum window
-size of 640 x 480 is expected.
### Window Border
-We ask that you remove the border before compiling your game. The Arcade Machine provides a more
-immersive experience for the user if there is no border. To remove the border of your game window,
-use SplashKit’s `window_toggle_border();` function after the `open_window()` function like so:
+We ask that you remove the border before compiling your game. The Arcade
+Machine provides a more immersive experience for the user if there is no
+border. To remove the border of your game window, call SplashKit's
+`window_toggle_border()` function after the `open_window()` function like so:
```cpp
int main()
@@ -76,24 +77,27 @@ A preview of your game will be shown in the Arcade Machine games menu.
(TBA - Please include an image of your game)
-This image must be sized as 600px x 540px so it will be displayed correctly in the games menu. The
-supported formats are `png`, `jpg` and `bmp`.
+This image must be sized at 600 x 540 px so it will be displayed correctly in
+the games menu. The supported formats are `png`, `jpg` and `bmp`.
-If you don’t have access to image editing software such as Adobe Illustrator/Photoshop, we suggest
-you use a browser-based tool such as [resizeimage](https://resizeimage.net/) to resize, crop or
-format a screenshot of your game.
+If you don’t have access to image editing software such as Adobe
+Illustrator/Photoshop, we suggest you use a browser-based tool such as
+[resizeimage](https://resizeimage.net/) to resize, crop or format a screenshot
+of your game.
## Configuration
-Each game must have a configuration file containing information about the game. There is a
-`config.txt` file located in the base directory of the repository, copy this file into the base
-directory of your game file and fill it with your game information. It must match the example
-configuration file shown below, but with your game information.
+Each game must have a configuration file containing information about the game.
+There is a `config.txt` file located in the base directory of the repository;
+copy this file into the base directory of your game folder and fill it with
+your game information. It must match the format of the example configuration
+file shown below.

-The configuration file **must** be in text (`.txt`) format, and it must be named `config.txt`. This
-must be located in your games root directory, alongside your `program.cpp` (example below).
+The configuration file **must** be in text (`.txt`) format, and it must be
+named `config.txt`. It must be located in your game's root directory, alongside
+your `program.cpp` (example below).

@@ -105,11 +109,13 @@ must be located in your games root directory, alongside your `program.cpp` (exam
Congratulations!
-You have now completed all the steps required to have your game showcased on the Arcade Machine.
+You have now completed all the steps required to have your game showcased on the
+Arcade Machine.
To contribute your game, go to the
-[Thoth Tech arcade-games repository](https://github.com/thoth-tech/arcade-games). Click the **Fork**
-button at the top right of the screen and create a fork of this repository.
+[Thoth Tech arcade-games repository](https://github.com/thoth-tech/arcade-games).
+Click the **Fork** button at the top right of the screen and create a fork of
+this repository.

@@ -117,7 +123,8 @@ You will now have the arcade-games repository in your personal Git.

-On your local, navigate to a desired file path and clone this repository using the bash command:
+On your local machine, navigate to a desired file path and clone this
+repository using the bash command:
```shell
git clone https://github.com//arcade-games.git
@@ -142,8 +149,8 @@ You will now see your game in the remote fork.
Now create a Pull request to have your game added to the arcade-machine.
-Click the **Pull requests** tab, then click **New pull request** button, then click **Create pull
-request**
+Click the **Pull requests** tab, then the **New pull request** button, then
+click **Create pull request**.

@@ -151,7 +158,8 @@ Write a message for the Arcade Machine and hit **Create pull request**

-You will see that merging is blocked until a member of the Arcade-Machine team has reviewed your
-game. We will be sure to get in contact with you once it has been approved!
+You will see that merging is blocked until a member of the Arcade-Machine team
+has reviewed your game. We will be sure to get in contact with you once it has
+been approved!

diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/01-adding-games-to-arcade-machine.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/01-adding-games-to-arcade-machine.md
index ecd090bc9..1e8a025a7 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/01-adding-games-to-arcade-machine.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/01-adding-games-to-arcade-machine.md
@@ -2,8 +2,8 @@
title: Adding Games to Arcade Machine
---
-> many of these steps are eser to perform via the GUI all commands below are for CLI but can start
-> the GUI by typing `startx` in the terminal
+> Many of these steps are easier to perform via the GUI. All commands below
+> are for the CLI, but you can start the GUI by typing `startx` in the
+> terminal.
## C++ Programs
@@ -60,8 +60,9 @@ title: Adding Games to Arcade Machine
~/Games/MyGame/MyGame
```
- - > Note: Some C++ programs may not run correctly when executed from a remote directory in which
- > case make the script chagne to the program directory first
+  - > Note: Some C++ programs may not run correctly when executed from a
+    > remote directory, in which case make the script change to the program
+    > directory first.
```
#!/bin/bash
@@ -119,9 +120,10 @@ git clone https://github.com/Thoth-Tech/MyGame.git
- you may need to change into a sub directory first
- > Compiling as a standalone program is presently required for C# games as dotnet and splashkit
- > paths are not loaded on CLI boot, paths are presently loaded by bashrc which only run on
- > interactive login shells. i.e. when you login to the desktop.
+  > Compiling as a standalone program is presently required for C# games, as
+  > dotnet and splashkit paths are not loaded on CLI boot; paths are presently
+  > loaded by bashrc, which only runs on interactive login shells, i.e. when
+  > you log in to the desktop.
```
skm dotnet publish --sc
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/02-setup-arcade-machine.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/02-setup-arcade-machine.md
index a176ff52a..108b8dcc3 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/02-setup-arcade-machine.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/02-setup-arcade-machine.md
@@ -9,8 +9,8 @@ Download current arcade Image from here (accessible to Thoth-Tech Team Members):
SHA256 Hash `31f0ea11c8492000d003108bf84afbb261ad6ee7c1be989f52a2b4add9d8821e`
-Use a program like [etcher](https://etcher.balena.io/) to create a bootable USB or SD card with the
-Arcade image.
+Use a program like [etcher](https://etcher.balena.io/) to create a bootable USB
+or SD card with the Arcade image.
1. Open etcher
2. Select image
@@ -39,11 +39,11 @@ These are the Credentials setup on the image
## Connect to eduroam
-Two changes need to be made to allow the Pi to access the eduroam network. One to network interfaces
-and one to wpa_supplicant:
+Two changes need to be made to allow the Pi to access the eduroam network. One
+to network interfaces and one to wpa_supplicant:
-1. Modify /etc/network/interfaces to bring the wlan0 interface up automatically, use DHCP and read
- from the wpa_supplicant config
+1. Modify /etc/network/interfaces to bring the wlan0 interface up automatically,
+ use DHCP and read from the wpa_supplicant config
- From the console open the interfaces file:
```shell
@@ -89,8 +89,9 @@ and one to wpa_supplicant:
}
```
- - Replace **YOURUSERNAME** and **YOURPASSWORD** with the arcade machine's eduroam login
- credentials. Ensure you include the domain I.E. ""
+  - Replace **YOURUSERNAME** and **YOURPASSWORD** with the arcade machine's
+    eduroam login credentials. Ensure you include the domain, i.e. ""
- Press Ctrl+X to exit and press **y** when prompted to save your changes.
3. Reboot and test network connectivity
@@ -100,7 +101,8 @@ and one to wpa_supplicant:
sudo reboot
```
- - Test network connectivity by pinging an external site, for example Google's DNS:
+ - Test network connectivity by pinging an external site, for example Google's
+ DNS:
```shell
ping 8.8.8.8
@@ -110,14 +112,16 @@ and one to wpa_supplicant:
### 1. Install SplashKit
-- Follow the [Linux (Ubuntu) Installation Guide](https://splashkit.io/installation/linux/) on the
- SplashKit website.
-- Primarly perform steps 1 and 2, VS code is optional unless you whish to adjust programming on the
- PI directly.
+- Follow the
+  [Linux (Ubuntu) Installation Guide](https://splashkit.io/installation/linux/)
+  on the SplashKit website.
+- Primarily perform steps 1 and 2; VS Code is optional unless you wish to edit
+  code on the Pi directly.
### 2. Install .NET (dotnet)
-- You can refer to [this page](https://learn.microsoft.com/en-us/dotnet/iot/deployment) but these
+- You can refer to
+ [this page](https://learn.microsoft.com/en-us/dotnet/iot/deployment) but these
are the core commands:
1. Run this install script
@@ -182,8 +186,8 @@ and one to wpa_supplicant:
nano ~/.emulationstation/es_systems.cfg
```
-8. Add the following configuration code to the `es_systems.cfg` file: Or, you can download a copy
- from here:
+8. Add the following configuration code to the `es_systems.cfg` file, or
+   download a copy from here:
Click
to Download
@@ -327,13 +331,13 @@ and one to wpa_supplicant:
### Setup WiFi Access Point (Optional)
+This will set the Pi up as a WiFi Access Point so you can SSH in when the USB
+ports are not accessible.
[Basic Guide Availble Here](https://gist.github.com/narate/d3f001c97e1c981a59f94cd76f041140)
-Enter the following commands, the SSID is set to Arcade1 - change the number for the machine you are
-working on.
+Enter the following commands. The SSID is set to Arcade1; change the number to
+match the machine you are working on.
```shell
nmcli con add type wifi ifname wlan0 con-name Hostspot autoconnect yes ssid Arcade1
@@ -343,10 +347,13 @@ nmcli con modify Hostspot wifi-sec.psk "GamesAreFun"
nmcli con up Hostspot
```
-The Pi will now be broadcasting a WiFi network called Arcade1 with the password GamesAreFun
+The Pi will now be broadcasting a WiFi network called Arcade1 with the password
+GamesAreFun.
-- The IP address of the Pi will be 10.42.0.1/24 this current setup does not allow for DHCP so any
-- connecting client will need to manually set an IP address in use the following settings:
+- The IP address of the Pi will be 10.42.0.1/24. This setup does not provide
+  DHCP, so any connecting client will need to manually set an IP address using
+  the following settings:
IP Address: 10.42.0.2 Subnet Mask: 255.255.255.0 Gateway: 10.42.0.1
@@ -382,18 +389,19 @@ The Pi will now be broadcasting a WiFi network called Arcade1 with the password
## Installing the Splashkit Theme
-1. Download the Themes folder located at: docs/Splashkit/Applications/Arcade Machines/Arcade Machine
- Setup/Files
+1. Download the Themes folder located at: docs/Splashkit/Applications/Arcade
+ Machines/Arcade Machine Setup/Files
-2. Copy the Themes folder into your .emulationstation folder, located at ~/.emulationstation on the
- Raspberry Pi or at %HOMEPATH%/.emulationstation on windows devices.
+2. Copy the Themes folder into your .emulationstation folder, located at
+   ~/.emulationstation on the Raspberry Pi or at %HOMEPATH%/.emulationstation
+   on Windows devices.
-3. In the es_systems.cfg file, located in the file paths mentioned in step 2, you will need to
- change the XML code for the theme to be "sk."
+3. In the es_systems.cfg file, located in the file paths mentioned in step 2,
+ you will need to change the XML code for the theme to be "sk."

-4. Launch EmulationStation, open the start menu, and under UI settings change Theme set to
- "Splashkit."
+4. Launch EmulationStation, open the start menu, and under UI settings change
+ Theme set to "Splashkit."
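Step 3's theme change can also be made non-interactively. The sketch below edits a stand-in file; on the real machine, point `CFG` at the `es_systems.cfg` from step 2 (the `<theme>` tag layout shown is an assumption about EmulationStation's config format):

```shell
#!/bin/bash
# Sketch: switch the <theme> value in es_systems.cfg to "sk" with sed.
# A stand-in file is used here so the sketch is safe to run anywhere;
# substitute the real ~/.emulationstation/es_systems.cfg path.
CFG=es_systems_demo.cfg
printf '<systemList>\n  <system>\n    <theme>carbon</theme>\n  </system>\n</systemList>\n' > "$CFG"
sed -i 's|<theme>[^<]*</theme>|<theme>sk</theme>|' "$CFG"
grep '<theme>' "$CFG"
```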

diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/03-create-pi-image.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/03-create-pi-image.md
index 4a8737196..047a3f9a4 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/03-create-pi-image.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Arcade Machine Setup/03-create-pi-image.md
@@ -2,11 +2,12 @@
title: Creating Raspberry Pi Image
---
-This document will outline how to create a compressed disk image of the Rasberry Pi that can then be
-burnt to new SD cards using the [Raspberry Pi imager](https://www.raspberrypi.com/software/) or a
-program like [etcher](https://etcher.balena.io/). Using this process, you can make new Gold Images
-and backups of the running software for the Arcade Machines. This process uses a script called
-PiShirnk
+This document will outline how to create a compressed disk image of the
+Raspberry Pi that can then be burnt to new SD cards using the
+[Raspberry Pi imager](https://www.raspberrypi.com/software/) or a program like
+[etcher](https://etcher.balena.io/). Using this process, you can make new Gold
+Images and backups of the running software for the Arcade Machines. This
+process uses a script called PiShrink.
The process was derived from this article:
@@ -15,31 +16,33 @@ The process was derived from this article:
-The current Compressed Gold Image is on the Thoth Tech Teams SharePoint Site. This is persistent but
-only accessible to Thoth Tech team members.
+The current Compressed Gold Image is on the Thoth Tech Teams SharePoint Site.
+This is persistent but only accessible to Thoth Tech team members.
## Requirements
- USB Key larger capacity than current SD Card in Pi
- Raspberry Pi with Arcade image
-Note it is possible to change the partition sizes on the Pi to use a smaller USB key, but I have not
-tested that process, and it is beyond the scope of this document. If you need that process, please
-refer to the Toms Hardware article above, and if successful, please update this document with the
-additional optional process.
+Note: it is possible to change the partition sizes on the Pi to use a smaller
+USB key, but I have not tested that process, and it is beyond the scope of this
+document. If you need that process, please refer to the Tom's Hardware article
+above, and if successful, please update this document with the additional
+optional process.
## Create Disk Image
1. Format USB key
- - Format the USB Key as either NTFS (if using Windows) or EXT4 (if using Linux); I'm not sure
- what is best for Mac OS. (This Wiki How Article explains how to format a key on Windows
+   - Format the USB Key as either NTFS (if using Windows) or EXT4 (if using
+     Linux); I'm not sure what is best for macOS. (This WikiHow article
+     explains how to format a key on Windows
)
1. Connect the USB key to the Pi

-1. On the Pi, open a terminal and run the following to install pishrink.sh and move it to
- /usr/local/bin
+1. On the Pi, open a terminal and run the following to install pishrink.sh and
+ move it to /usr/local/bin
```
wget https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh
@@ -55,12 +58,13 @@ additional optional process.

- You should be able to see the mount point for the USB ours has been mounted at
- `/media/deakin/Spare` (Spare is the volume name set during formatting)
+   You should be able to see the mount point for the USB; ours has been
+   mounted at `/media/deakin/Spare` (Spare is the volume name set during
+   formatting)
1. Copy the current SD card to the USB as an image file, i.e. my command was
- `sudo dd if=/dev/mmcblk0 of=/media/deakin/Spare/ArcadeImage-19.08.2023.img bs=1M` Set the
- filename as you see fit but if updating the gold image suggest using a date or version number.
+   `sudo dd if=/dev/mmcblk0 of=/media/deakin/Spare/ArcadeImage-19.08.2023.img bs=1M`
+   Set the filename as you see fit, but if updating the gold image we suggest
+   using a date or version number.
```
sudo dd if=/dev/mmcblk0 of=[mount point]/myimg.img bs=1M
@@ -78,6 +82,7 @@ additional optional process.
sudo pishrink.sh -z ArcadeImage-19.08.2023.img
```
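Before uploading the compressed image, it can help to record a checksum alongside it so the copy on SharePoint can be verified after download. A sketch of the workflow, using a stand-in file (substitute the real `ArcadeImage-*.img.gz`):

```shell
#!/bin/bash
# Sketch: record and later verify a SHA256 checksum for the compressed image.
# A stand-in file is created here so the sketch runs anywhere; substitute the
# real ArcadeImage-*.img.gz filename.
echo "demo image contents" > ArcadeImage-demo.img.gz
sha256sum ArcadeImage-demo.img.gz > ArcadeImage-demo.img.gz.sha256
# On the receiving machine, the same .sha256 file verifies the download:
sha256sum -c ArcadeImage-demo.img.gz.sha256
```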
-You should now have a compressed image file, i.e. `ArcadeImage-19.08.2023.img.gz` refer to
+You should now have a compressed image file, e.g.
+`ArcadeImage-19.08.2023.img.gz`. Refer to
[Setup Arcade Machine](/products/splashkit/documentation/arcade-machine/arcade-machine-setup/02-setup-arcade-machine)
for instructions on burning the image to a new SD card or USB.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/01-emulation-station-script-research.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/01-emulation-station-script-research.md
index f102cd5f8..b3fc56ae9 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/01-emulation-station-script-research.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/01-emulation-station-script-research.md
@@ -7,30 +7,34 @@ title: Emulation Station Script Research
1. In your ~homepath/.emulationstation folder create a new folder named scripts
2. within the scripts folder, using the table from
- (Under scripting > 2. Event
- directories), create a new folder with the name of the event you want the script to run for.
+ (Under
+ scripting > 2. Event directories), create a new folder with the name of the
+ event you want the script to run for.
-3. Within the event folder, you can place Shell Script files that you want to run on the event.
+3. Within the event folder, you can place Shell Script files that you want to
+ run on the event.
### Notes
-The scripts will need testing on either a raspberry pi or a Linux pc, as windows doesn't natively
-support running shell scripts.
+The scripts will need testing on either a Raspberry Pi or a Linux PC, as
+Windows doesn't natively support running shell scripts.
-As shown by the table on , depending on
-the event, certain bits of data get passed along to the script.
+As shown by the table on
+, depending on the
+event, certain bits of data get passed along to the script.
## Ten Minute Idle Timer
-For creating the idle timer you will most likely need to have a script running from the game-start
-event, which passes down the %rom_path%, %rom_name%, and %game_name% arguments. The arcade machine
-isn't using emulators testing would need to see which of these arguments can call a close on the
-opened game.
+For creating the idle timer, you will most likely need a script running from
+the game-start event, which passes down the %rom_path%, %rom_name%, and
+%game_name% arguments. Since the arcade machine isn't using emulators, testing
+would be needed to see which of these arguments can call a close on the opened
+game.
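As a starting point, a game-start script skeleton might look like the sketch below. The log path is illustrative, and the assumption (from the findings above) is that the event data arrives as positional arguments in rom-path, rom-name, game-name order:

```shell
#!/bin/bash
# Sketch of a game-start event script. Assumes EmulationStation passes the
# rom path, rom name, and game name as positional arguments; the log path
# is illustrative.
ROM_PATH="$1"
ROM_NAME="$2"
GAME_NAME="$3"
echo "game-start: path=$ROM_PATH rom=$ROM_NAME game=$GAME_NAME" >> /tmp/es-events.log
```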
### Test Hello World Script
-Below is a test script that will create a file containing the words HELLO WORLD, the `#!/bin/bash`
-line, known as a "shebang", gives the script elevated permissions.
+Below is a test script that will create a file containing the words HELLO
+WORLD. The `#!/bin/bash` line, known as a "shebang", tells the system to run
+the script with Bash.
```shell
#!/bin/bash
@@ -45,8 +49,9 @@ echo after
## Useful Links
-This is a stack exchange question looking into detecting inputs on a linux/unix device. The answers
-talk about the file paths for devices and gives a sample code using C.
+This is a Stack Exchange question looking into detecting inputs on a
+Linux/Unix device. The answers talk about the file paths for devices and give
+sample code using C.
- A forum post asking
-about some issues regarding a game-start script.
+
+A forum post asking about some issues regarding a game-start script.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/02-add-second-controller-findings.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/02-add-second-controller-findings.md
index 84f8d1258..68eb98d0a 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/02-add-second-controller-findings.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/02-add-second-controller-findings.md
@@ -4,56 +4,63 @@ title: Emulation Station - Allow both users to control emulationstation menus
## Background
-The arcade machine uses a bespoke controller for input. The controller hardware uses JoyToKey to
-remap joypad inputs to a emulated keyboard. Both players have separate physical controls, but the
-arcade sees this as a single unified keyboard.
+The arcade machine uses a bespoke controller for input. The controller
+hardware uses JoyToKey to remap joypad inputs to an emulated keyboard. Both
+players have separate physical controls, but the arcade sees this as a single
+unified keyboard.
-The user interface uses a program called emulationstation, which is in turn integrated into
-RetroPie.
+The user interface uses a program called emulationstation, which is in turn
+integrated into RetroPie.
-This has the effect of only allowing a single user to control the menu outside of games (in
-emulationstation). An enhancement request has been raised to add functionality where both players
-can control menus.
+This has the effect of only allowing a single user to control the menu outside
+of games (in emulationstation). An enhancement request has been raised to add
+functionality where both players can control menus.
### Findings
-- Linux is only able to interact with a single keyboard. It is not possible to have two hardware
- keyboards, and use both for input.
+- Linux is only able to interact with a single keyboard. It is not possible to
+ have two hardware keyboards, and use both for input.
-- The desired functionality **is** possible with two discrete controllers. If the user connects and
- configures two controllers to the arcade machine, both players will be able to interact with menus
- independent of each other. This has been tested locally with me using a PlayStation 4 &
- PlayStation 5 controller connected via USB.
- - It is also possible to use one keyboard and one controller, and have each device represent input
- for each player
+- The desired functionality **is** possible with two discrete controllers. If
+  the user connects and configures two controllers to the arcade machine, both
+  players will be able to interact with menus independently of each other. I
+  tested this locally using a PlayStation 4 and a PlayStation 5 controller
+  connected via USB.
+ - It is also possible to use one keyboard and one controller, and have each
+ device represent input for each player
- RetroPie & EmulationStation handle input differently.
- - Emulationstation configs do not discriminate between player 1 and player 2 (or any number of
- players). Rather the config is just a series of buttons mapped to inputs.
+ - Emulationstation configs do not discriminate between player 1 and player 2
+ (or any number of players). Rather the config is just a series of buttons
+ mapped to inputs.
- Config file: `/home/$username/.emulationstation/es_input.cfg`
- - RetroPie **does** discriminate between players. Inputs are named player1_up, player1_down etc.
- - Creating additional lines in the config file for player2 controls does not affect
- Emulationstation I.E.
+ - RetroPie **does** discriminate between players. Inputs are named player1_up,
+ player1_down etc.
+  - Creating additional lines in the config file for player2 controls does not
+    affect Emulationstation, e.g.
- `player1_button_up="a", player2_button_up="z"`
- Config file: `/opt/retropie/configs/all/retroarch.cfg`
-- Modifying the Retropie config file directly does not change the Emulationstation controls.
-- Modifying the EmulationStation config directly breaks input when the program is next started.
+- Modifying the Retropie config file directly does not change the
+ Emulationstation controls.
+- Modifying the EmulationStation config directly breaks input when the program
+ is next started.
- This can be fixed by resetting the config
- - Running the script declared in this config file does not appear to update the config either
+ - Running the script declared in this config file does not appear to update
+ the config either
- Shell script path:
`/opt/retropie/supplementary/emulationstation/scripts/inputconfiguration.sh`
- - In order to actually update the config it seems necessary to use the input confg wizard in the
- GUI.
+  - In order to actually update the config, it seems necessary to use the
+    input config wizard in the GUI.
### Emulationstation input config file
-Emulationstation holds configuration settings for user input devices in a file located here:
-`/home/$username/.emulationstation/es_input.cfg`. This is an XML formatted file. Each device type
-creates it's own "inputConfig" section. The `id` refers to the decimal
-[ASCII character code](https://www.ascii-code.com/). E.G lowercase "a" has a decimal value of 97
-(0x61).
+Emulationstation holds configuration settings for user input devices in a file
+located here: `/home/$username/.emulationstation/es_input.cfg`. This is an
+XML-formatted file. Each device type creates its own "inputConfig" section.
+The `id` refers to the decimal
+[ASCII character code](https://www.ascii-code.com/), e.g. lowercase "a" has a
+decimal value of 97 (0x61).
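The decimal code for a given key can be looked up from the shell: POSIX `printf` treats an argument beginning with a single quote as the character's numeric code.

```shell
# Print the decimal character code for a key. A leading single quote makes
# printf emit the numeric value of the character that follows it.
printf '%d\n' "'a"   # lowercase "a" -> prints 97
printf '%d\n' "'A"   # uppercase "A" -> prints 65
```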
An example config file looks like this:
@@ -100,30 +107,33 @@ Adding multiple "inputConfig" sections for keyboards does not work. I.E:
### Outcome
-It does not appear to be possible to implement this functionality given the current state of
-constituent systems & hardware limitations of the Linux kernel without significantly reworking other
-aspects of the Arcade Machine's hardware & software configuration.
+It does not appear to be possible to implement this functionality given the
+current state of constituent systems & hardware limitations of the Linux kernel
+without significantly reworking other aspects of the Arcade Machine's hardware &
+software configuration.
#### Main limiting factors are
-- Inability to address two keyboards concurrently - This is extremely unlikely to change, and very
- difficult to work around
-- Limitations in ability to modify EmulationStation configs to suit arcade machine needs
-- System configuration is brittle - manually editing many of RetroPie (or constituent) configs will
- silently break functionality
+- Inability to address two keyboards concurrently - This is extremely unlikely
+ to change, and very difficult to work around
+- Limitations in ability to modify EmulationStation configs to suit arcade
+ machine needs
+- System configuration is brittle - manually editing many of RetroPie (or
+ constituent) configs will silently break functionality
- Existing dependence on handling Arcade Machine input as an emulated keyboard
### Recommendations
-I would assess this enhancement as "low priority" in the grander scheme of the project. To be clear,
-dual user input is currently supported within SplashKit games themselves. This limitation only
-affects the EmulationStation overlay. Which essentially means that only player 1 has control over
-this menu.
+I would assess this enhancement as "low priority" in the grander scheme of the
+project. To be clear, dual user input is currently supported within SplashKit
+games themselves. This limitation only affects the EmulationStation overlay,
+which essentially means that only player 1 has control over this menu.
-If this is deemed to be a requirement then I would recommend removing JoyToKey from the arcade
-machine and handling input directly as a joypad. This would also require bifurcating the two joypads
-into two discrete devices. As mentioned above, this would require a rework of several other systems
-to accommodate this change. In addition many SplashKit games would need to be reworked to handle
+If this is deemed to be a requirement, then I would recommend removing JoyToKey
+from the arcade machine and handling input directly as a joypad. This would also
+require bifurcating the two joypads into two discrete devices. As mentioned
+above, this would require a rework of several other systems to accommodate this
+change. In addition, many SplashKit games would need to be reworked to handle
joypad input, as most currently use Keyboard input only.
### Links
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/03-below-the-surface-game-test-report-20Aug2023.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/03-below-the-surface-game-test-report-20Aug2023.md
index d8cff8a38..ab7198431 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/03-below-the-surface-game-test-report-20Aug2023.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/03-below-the-surface-game-test-report-20Aug2023.md
@@ -7,19 +7,20 @@ Date of Testing: [20/08/2023]
## Executive Summary
-This test report outlines findings from testing the arcade game "Below the Surface." The game
-exhibited a critical issue when played with two players, where the movement of one player caused the
-screen to move, leading to the other player's death when they moved off-screen. Additionally, a
-suggestion was made to restrict player movements when they are positioned beyond the screen's
-boundaries. The report discusses the problem's impact, provides reproduction steps, and suggests
-potential solutions.
+This test report outlines findings from testing the arcade game "Below the
+Surface." The game exhibited a critical issue when played with two players,
+where the movement of one player caused the screen to move, leading to the other
+player's death when they moved off-screen. Additionally, a suggestion was made
+to restrict player movements when they are positioned beyond the screen's
+boundaries. The report discusses the problem's impact, provides reproduction
+steps, and suggests potential solutions.
## Testing Goals
-- The primary objective of testing was to assess the gameplay experience of the arcade game "Below
- the Surface" when played with two players simultaneously.
-- The focus was on evaluating the game's screen movement behaviour on the arcade machine and
- identifying any issues that arise as a result.
+- The primary objective of testing was to assess the gameplay experience of the
+ arcade game "Below the Surface" when played with two players simultaneously.
+- The focus was on evaluating the game's screen movement behaviour on the arcade
+ machine and identifying any issues that arise as a result.
## Testing Environment
@@ -31,7 +32,8 @@ The following test cases were executed during the testing phase:
### Test Case 1: 2 players movement
-- Description: Play in pairs to experience the effects of player movements in 2 players mode
+- Description: Play in pairs to experience the effects of player movements in 2
+ players mode
1. Launch the game on the arcade machine with two players.
2. Begin gameplay, with both players moving in different directions.
@@ -41,48 +43,55 @@ The following test cases were executed during the testing phase:
### Issue: Screen Movement Causing Player Deaths
-- Description: When playing with two players, the movement of one player causes the screen to shift,
- resulting in the other player's death if they move off-screen.
+- Description: When playing with two players, the movement of one player causes
+ the screen to shift, resulting in the other player's death if they move
+ off-screen.
1. Launch the game on the arcade machine with two players.
2. Begin gameplay, with both players moving in different directions.
-3. Observe the screen movement causing one player to be left off-screen and subsequently dying.
-
-**Impact:** The issue severely hampers multiplayer gameplay, rendering it frustrating and
-unplayable.
-
-**Proposed Solution:** The movement of the screen needs to be adjusted, and deaths should not occur
-because of the movement of two players when playing. This ensures the core gameplay and fun of the
-game.
-
-- Improved Dual-Screen Mode: Modify the game's screen behaviour to split into two distinct views
- when two players move in opposite directions. This would allow each player to move independently
- without affecting the other's gameplay. However, this solution may require significant technical
- adjustments and testing.
-- Player Respawning Mechanism: Implement a respawning mechanism that triggers when a player moves
- off-screen. Upon detection of off-screen movement, the player would respawn at a safe location,
- ensuring they are not unfairly penalized for screen movement.
-- Restrict Movement Beyond Screen Boundaries: Restrict player movements when they are positioned
- beyond the screen's boundaries. This would prevent screen movement caused by one player's actions
- and allow both players to remain on the same screen.
+3. Observe the screen movement causing one player to be left off-screen and
+ subsequently dying.
+
+**Impact:** The issue severely hampers multiplayer gameplay, rendering it
+frustrating and unplayable.
+
+**Proposed Solution:** The screen movement needs to be adjusted so that deaths
+do not occur as a result of two players moving during play. This preserves the
+core gameplay and fun of the game.
+
+- Improved Dual-Screen Mode: Modify the game's screen behaviour to split into
+ two distinct views when two players move in opposite directions. This would
+ allow each player to move independently without affecting the other's
+ gameplay. However, this solution may require significant technical adjustments
+ and testing.
+- Player Respawning Mechanism: Implement a respawning mechanism that triggers
+ when a player moves off-screen. Upon detection of off-screen movement, the
+ player would respawn at a safe location, ensuring they are not unfairly
+ penalized for screen movement.
+- Restrict Movement Beyond Screen Boundaries: Restrict player movements when
+ they are positioned beyond the screen's boundaries. This would prevent screen
+ movement caused by one player's actions and allow both players to remain on
+ the same screen.
## Suggestions and Feedback
-To address this issue and provide a seamless multiplayer experience, the following potential
-solutions are recommended:
+In addition to the solutions above, the following suggestions and feedback were
+noted during testing:
-- When the player reaches full health, he cannot pick up heart-shaped items to increase health, but
- can instead increase score.
-- The colour of the cockroach shaped monster is too dim, and the dark background is easy to cause
- the player to lose sight of it.
-- The jump rate is too fast, making the player's countermeasures against the monsters too difficult.
+- When the player reaches full health, they cannot pick up heart-shaped items
+  to increase health; instead, these items increase the score.
+- The colour of the cockroach-shaped monster is too dim, and the dark
+  background makes it easy for the player to lose sight of it.
+- The jump rate is too fast, making it too difficult for the player to counter
+  the monsters.
## Conclusion
-The critical issue of screen movement affecting multiplayer gameplay in the arcade game "Below the
-Surface" significantly detracts from the intended cooperative experience. The recommended solutions
-aim to resolve this issue and restore the game's multiplayer functionality. Implementing these
-solutions, along with the provided suggestions, will greatly enhance the overall enjoyment and
-engagement of players.
+The critical issue of screen movement affecting multiplayer gameplay in the
+arcade game "Below the Surface" significantly detracts from the intended
+cooperative experience. The recommended solutions aim to resolve this issue and
+restore the game's multiplayer functionality. Implementing these solutions,
+along with the provided suggestions, will greatly enhance the overall enjoyment
+and engagement of players.
---
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/04-asteroids-game-test-report-20Aug2023.md b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/04-asteroids-game-test-report-20Aug2023.md
index 56acfc83f..88c415282 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/04-asteroids-game-test-report-20Aug2023.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/Research and Findings/04-asteroids-game-test-report-20Aug2023.md
@@ -7,10 +7,11 @@ Date of Testing: [20/08/2023]
## Executive Summary
-This report outlines the findings from the testing of the Asteroids. The testing process aimed to
-identify and address potential issues affecting gameplay, graphics, performance, user interface, and
-overall user experience. One significant issue identified is the excessive speed of meteorites,
-which has a notable impact on the game's difficulty and player engagement.
+This report outlines the findings from testing the Asteroids game. The testing
+process aimed to identify and address potential issues affecting gameplay,
+graphics, performance, user interface, and overall user experience. One
+significant issue identified is the excessive speed of meteorites, which has a
+notable impact on the game's difficulty and player engagement.
## Testing Goals
@@ -36,71 +37,79 @@ The following test cases were executed during the testing phase:
2. Start a new game session.
3. Observe the speed at which meteorites move across the screen.
-- Expected Result: Meteorite speed is challenging yet manageable, allowing players to navigate
- effectively.
-- Actual Result: Meteorites move at an excessive speed, making it difficult for players to react and
- avoid collisions.
+- Expected Result: Meteorite speed is challenging yet manageable, allowing
+ players to navigate effectively.
+- Actual Result: Meteorites move at an excessive speed, making it difficult for
+ players to react and avoid collisions.
## Bugs/Issues
### Issue: Excessive Meteorite Speed
-- Description: During gameplay, it was observed that the meteorites move at an excessively high
- speed, leading to an increase in game difficulty beyond the intended level.
+- Description: During gameplay, it was observed that the meteorites move at an
+ excessively high speed, leading to an increase in game difficulty beyond the
+ intended level.
- Steps to Reproduce:
1. Launch the game.
2. Start a new game session.
3. Observe the speed at which meteorites move across the screen.
-- Notes: The rapid movement of meteorites makes it challenging for players to react and navigate
- effectively, impacting the overall gameplay experience. This issue is particularly noticeable on
- higher levels, where it becomes nearly impossible to avoid collisions due to the meteorites'
- speed.
+- Notes: The rapid movement of meteorites makes it challenging for players to
+ react and navigate effectively, impacting the overall gameplay experience.
+ This issue is particularly noticeable on higher levels, where it becomes
+ nearly impossible to avoid collisions due to the meteorites' speed.
-**Proposed Solution:** The meteorite speed needs to be adjusted to provide players with a fair and
-engaging gameplay experience. By reducing the speed to a more manageable level, players will have
-better control over their ship's movements and can effectively strategize to avoid collisions. This
-adjustment will maintain a challenging aspect while not compromising the core enjoyment of the game.
+**Proposed Solution:** The meteorite speed needs to be adjusted to provide
+players with a fair and engaging gameplay experience. By reducing the speed to a
+more manageable level, players will have better control over their ship's
+movements and can effectively strategize to avoid collisions. This adjustment
+will maintain a challenging aspect while not compromising the core enjoyment of
+the game.
**Testing Required:**
1. Implement adjustments to meteorite speed.
-2. Conduct thorough playtesting to ensure the new speed level provides an appropriate balance of
- challenge and enjoyment.
-3. Gather player feedback to validate the changes and assess whether the meteorite movement feels
- more balanced.
+2. Conduct thorough playtesting to ensure the new speed level provides an
+ appropriate balance of challenge and enjoyment.
+3. Gather player feedback to validate the changes and assess whether the
+ meteorite movement feels more balanced.
-**Impact:** This issue significantly affects the game's playability and enjoyment. Addressing the
-meteorite speed will enhance the overall experience and encourage players to engage with the game
-for longer periods. Failure to address this issue could result in player frustration, leading to
-reduced player retention and potential negative reviews.
+**Impact:** This issue significantly affects the game's playability and
+enjoyment. Addressing the meteorite speed will enhance the overall experience
+and encourage players to engage with the game for longer periods. Failure to
+address this issue could result in player frustration, leading to reduced player
+retention and potential negative reviews.
## Performance Evaluation
-Performance evaluation revealed no significant issues. The game maintained a stable frame rate even
-during intense gameplay moments, and load times were within acceptable limits.
+Performance evaluation revealed no significant issues. The game maintained a
+stable frame rate even during intense gameplay moments, and load times were
+within acceptable limits.
## User Experience (UX) Evaluation
-The overall user experience was positive. However, the excessive meteorite speed issue has
-negatively impacted the user experience by making the game overly challenging.
+The overall user experience was positive. However, the excessive meteorite speed
+issue has negatively impacted the user experience by making the game overly
+challenging.
## Suggestions and Feedback
- Address the excessive meteorite speed issue to enhance gameplay balance.
-- Visual Feedback: Provide clearer visual cues when the player's ship is hit by a meteorite. This
- can include visual effects, screen shakes, or sound effects to enhance the impact and improve
- feedback to the player.
-- Power-ups and Abilities: Consider introducing power-ups or special abilities that the player can
- acquire during gameplay. These could include temporary shields, slow motion, or increased
- firepower, adding variety and strategic depth to the gameplay.
+- Visual Feedback: Provide clearer visual cues when the player's ship is hit by
+ a meteorite. This can include visual effects, screen shakes, or sound effects
+ to enhance the impact and improve feedback to the player.
+- Power-ups and Abilities: Consider introducing power-ups or special abilities
+ that the player can acquire during gameplay. These could include temporary
+ shields, slow motion, or increased firepower, adding variety and strategic
+ depth to the gameplay.
## Conclusion
-The testing process has revealed several strengths of the Asteroids, including its intuitive
-controls and engaging gameplay. However, the issue of excessive meteorite speed poses a significant
-challenge to player enjoyment and needs to be promptly addressed. By implementing the proposed
-solution, the game can provide a more balanced and enjoyable experience for players.
+The testing process has revealed several strengths of the Asteroids game,
+including its intuitive controls and engaging gameplay. However, the issue of
+excessive meteorite speed poses a significant challenge to player enjoyment and
+needs to be promptly addressed. By implementing the proposed solution, the game
+can provide a more balanced and enjoyable experience for players.
---
diff --git a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/index.mdx b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/index.mdx
index 8abf246a8..adbc80fb1 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/index.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Arcade Machine/index.mdx
@@ -7,27 +7,27 @@ sidebar:
:::note[Arcade Machines]
-See the [Arcade Machines](/products/splashkit/#arcade-machines) section of the overview page for
-more information.
+See the [Arcade Machines](/products/splashkit/#arcade-machines) section of the
+overview page for more information.
:::
:::note[Arcade Games Development]
-See the [Game Development](/products/splashkit/#game-development) section of the overview page for
-more information.
+See the [Game Development](/products/splashkit/#game-development) section of the
+overview page for more information.
:::
:::caution
-The files and folders in the Arcade Games section have been moved here from the "documentation"
-repo, and may be out of date.
+The files and folders in the Arcade Games section have been moved here from the
+"documentation" repo, and may be out of date.
-We hope to be able to improve the documentation on this site, related to the Arcade Machine and
-Games Development, in the near future.
+We hope to be able to improve the documentation on this site, related to the
+Arcade Machine and Games Development, in the near future.
-Join the team and come help improve the documentation a development processes for the Arcade
-Machine!
+Join the team and come help improve the documentation and development processes
+for the Arcade Machine!
:::
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/01-unit-testing-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/01-unit-testing-guide.mdx
index d627659d8..ee9df2bc0 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/01-unit-testing-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/01-unit-testing-guide.mdx
@@ -6,10 +6,11 @@ import { FileTree, Steps, Tabs, TabItem } from "@astrojs/starlight/components";
## Introduction
-SplashKit uses [Catch2 2.x](https://github.com/catchorg/Catch2/tree/v2.x) as a framework for unit
-tests.
+SplashKit uses [Catch2 2.x](https://github.com/catchorg/Catch2/tree/v2.x) as a
+framework for unit tests.
-Tests are written in C++ with the aid of macros from Catch2. Test files are located at:
+Tests are written in C++ with the aid of macros from Catch2. Test files are
+located at:
@@ -21,12 +22,13 @@ Tests are written in C++ with the aid of macros from Catch2. Test files are loca
-`unit_test_main.cpp` is the entry point for all unit tests. You do not need to modify this to write
-your own tests or update existing ones.
+`unit_test_main.cpp` is the entry point for all unit tests. You do not need to
+modify this to write your own tests or update existing ones.
-The `unit_test_.cpp` files contain tests for related parts of SplashKit. For example,
-`unit_test_utilities.cpp` has tests for SplashKit's utility functions. A test file must include the
-Catch2 header file along with any other includes required:
+The `unit_test_.cpp` files contain tests for related parts of SplashKit.
+For example, `unit_test_utilities.cpp` has tests for SplashKit's utility
+functions. A test file must include the Catch2 header file along with any other
+includes required:
```cpp
#include "catch.hpp"
@@ -34,7 +36,8 @@ Catch2 header file along with any other includes required:
## Writing a Unit Test
-At a minimum, a unit test consists of a `TEST_CASE` and an assertion (usually `REQUIRE`):
+At a minimum, a unit test consists of a `TEST_CASE` and an assertion (usually
+`REQUIRE`):
```cpp
TEST_CASE("gets the number of milliseconds that have passed since the program was started", "[current_ticks]")
@@ -44,13 +47,14 @@ TEST_CASE("gets the number of milliseconds that have passed since the program wa
}
```
-`TEST_CASE(name, [,tags])` defines a test case with the given name and, optionally, one or more
-tags.
+`TEST_CASE(name [, tags])` defines a test case with the given name and,
+optionally, one or more tags.
-`REQUIRE` evaluates an expression and aborts the test as a failure if the result is false.
-`REQUIRE_FALSE` is similar but fails if the expression evaluates true. There are
-[other assertion macros](https://github.com/catchorg/Catch2/blob/v2.x/docs/assertions.md#top) but
-these are the most common.
+`REQUIRE` evaluates an expression and aborts the test as a failure if the result
+is false. `REQUIRE_FALSE` is similar but fails if the expression evaluates to
+true. There are
+[other assertion macros](https://github.com/catchorg/Catch2/blob/v2.x/docs/assertions.md#top)
+but these are the most common.
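+
+As a minimal illustration of the two macros (a sketch; the helper function here
+is hypothetical, not part of SplashKit):
+
+```cpp
+#include "catch.hpp"
+
+// Hypothetical helper used only for this example.
+bool is_even(int n) { return n % 2 == 0; }
+
+TEST_CASE("checks evenness with REQUIRE and REQUIRE_FALSE", "[example]")
+{
+    REQUIRE(is_even(4));        // passes because the expression is true
+    REQUIRE_FALSE(is_even(3));  // passes because the expression is false
+}
+```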
A test may contain multiple assertions:
@@ -63,8 +67,9 @@ TEST_CASE("random number float between 0 and 1 is generated", "[rnd]")
}
```
-You may write tests that have some common steps, such as defining a variable. You can define one or
-more `SECTION(name)` inside a `TEST_CASE`. The `TEST_CASE` is run from the start for each `SECTION`.
+You may write tests that have some common steps, such as defining a variable.
+You can define one or more `SECTION(name)` inside a `TEST_CASE`. The `TEST_CASE`
+is run from the start for each `SECTION`.
```cpp
TEST_CASE("return a SplashKit resource of resource_kind with name filename as a string", "[file_as_string]")
@@ -93,8 +98,8 @@ TEST_CASE("return a SplashKit resource of resource_kind with name filename as a
}
```
-This test has three `SECTION`s, so the `TEST_CASE` will run three times. Each time, the `RESOURCE`
-and `RESOURCE_PATH` variables will be defined.
+This test has three `SECTION`s, so the `TEST_CASE` will run three times. Each
+time, the `RESOURCE` and `RESOURCE_PATH` variables will be defined.
## Building the test project
@@ -124,10 +129,11 @@ and `RESOURCE_PATH` variables will be defined.
Select the Default configure preset

In the CMake Tools extension click the button
-  next to Build and select
- skunit_tests
+  next to
+ Build and select skunit_tests

- Click the button  next to
+ Click the button
+  next to
Debug and select skunit_tests

@@ -141,8 +147,8 @@ and `RESOURCE_PATH` variables will be defined.
```
Or in VS Code:
- In the CMake Tools extension, click the Build button. The test project will also be built when
- you refresh tests on the Testing tab of VS Code.
+ In the CMake Tools extension, click the Build button. The test project will
+ also be built when you refresh tests on the Testing tab of VS Code.

@@ -161,10 +167,11 @@ and `RESOURCE_PATH` variables will be defined.
Select the Default configure preset

In the CMake Tools extension click the button
-  next to Build and select
- skunit_tests
+  next to
+ Build and select skunit_tests

- Click the button  next to
+ Click the button
+  next to
Debug and select skunit_tests

@@ -178,8 +185,8 @@ and `RESOURCE_PATH` variables will be defined.
```
Or in VS Code:
- In the CMake Tools extension, click Build. The test project will also be built when you refresh
- tests on the Testing tab of VS Code.
+ In the CMake Tools extension, click Build. The test project will also be
+ built when you refresh tests on the Testing tab of VS Code.

@@ -205,10 +212,11 @@ and `RESOURCE_PATH` variables will be defined.
Select the Default configure preset

In the CMake Tools extension click the button
-  next to Build and select
- skunit_tests
+  next to
+ Build and select skunit_tests

- Click the button  next to
+ Click the button
+  next to
Debug and select skunit_tests

@@ -222,8 +230,8 @@ and `RESOURCE_PATH` variables will be defined.
```
Or in VS Code:
- In the CMake Tools extension, click Build. The test project will also be built when you refresh
- tests on the Testing tab of VS Code.
+ In the CMake Tools extension, click Build. The test project will also be
+ built when you refresh tests on the Testing tab of VS Code.

@@ -237,16 +245,17 @@ and `RESOURCE_PATH` variables will be defined.
-- It's a good idea to run the unit tests in a random order so that you can confirm that they run
- indepedently of one another:
+- It's a good idea to run the unit tests in a random order so that you can
+  confirm that they run independently of one another:
```shell
cd ../../bin
./skunit_tests --order rand
```
- By default, this will only show reports for failed tests. To show reports for successful tests as
- well, use the option `--success`. More command line options can be found in
+ By default, this will only show reports for failed tests. To show reports for
+ successful tests as well, use the option `--success`. More command line
+ options can be found in
[Catch2's documentation](https://github.com/catchorg/Catch2/blob/v2.x/docs/command-line.md).
- If you want to run a specific test, or group of tests, you can do so:
@@ -255,22 +264,23 @@ and `RESOURCE_PATH` variables will be defined.
./skunit_tests
```
- The `test spec` can be a test name or tags and supports wildcards. For example, `*string*` would
- run all of the tests with "string" in the name.
+ The `test spec` can be a test name or tags and supports wildcards. For
+ example, `*string*` would run all of the tests with "string" in the name.
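+
+The test spec and the options above can also be combined in one invocation; a
+sketch, using standard Catch2 command line flags:
+
+```shell
+# Run only the tests tagged [rnd], in random order, reporting successes too
+./skunit_tests "[rnd]" --order rand --success
+```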
-- It's a good idea to run the unit tests in a random order so that you can confirm that they run
- indepedently of one another:
+- It's a good idea to run the unit tests in a random order so that you can
+  confirm that they run independently of one another:
```shell
cd ../../bin
./skunit_tests --order rand
```
- By default, this will only show reports for failed tests. To show reports for successful tests as
- well, use the option `--success`. More command line options can be found in
+ By default, this will only show reports for failed tests. To show reports for
+ successful tests as well, use the option `--success`. More command line
+ options can be found in
[Catch2's documentation](https://github.com/catchorg/Catch2/blob/v2.x/docs/command-line.md).
- If you want to run a specific test, or group of tests, you can do so:
@@ -279,22 +289,23 @@ and `RESOURCE_PATH` variables will be defined.
./skunit_tests
```
- The `test spec` can be a test name or tags and supports wildcards. For example, `*string*` would
- run all of the tests with "string" in the name.
+ The `test spec` can be a test name or tags and supports wildcards. For
+ example, `*string*` would run all of the tests with "string" in the name.
-- It's a good idea to run the unit tests in a random order so that you can confirm that they run
- indepedently of one another:
+- It's a good idea to run the unit tests in a random order so that you can
+  confirm that they run independently of one another:
```shell
cd ../../bin
./skunit_tests.exe --order rand
```
- By default, this will only show reports for failed tests. To show reports for successful tests as
- well, use the option `--success`. More command line options can be found in
+ By default, this will only show reports for failed tests. To show reports for
+ successful tests as well, use the option `--success`. More command line
+ options can be found in
[Catch2's documentation](https://github.com/catchorg/Catch2/blob/v2.x/docs/command-line.md).
- If you want to run a specific test, or group of tests, you can do so:
@@ -303,8 +314,8 @@ and `RESOURCE_PATH` variables will be defined.
./skunit_tests.exe
```
- The `test spec` can be a test name or tags and supports wildcards. For example, `*string*` would
- run all of the tests with "string" in the name.
+ The `test spec` can be a test name or tags and supports wildcards. For
+ example, `*string*` would run all of the tests with "string" in the name.
@@ -315,8 +326,8 @@ You can run tests from the Testing tab in VS Code

- Running all tests:
- Click Run Tests. Each test will be run and the status of each can be seen in the test list after a
- test runs.
+ Click Run Tests. Each test will be run and the status of each can be seen in
+ the test list after a test runs.

- Running a specific test:
Click Run Test next to any test on the test list to run it
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/02-onboarding-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/02-onboarding-guide.mdx
index 8e2419a15..9f37f986d 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/02-onboarding-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/02-onboarding-guide.mdx
@@ -6,9 +6,10 @@ sidebar:
:::note
-This guide is the original onboarding guide that was created for this project, but this information
-has been moved to the SplashKit-wide [GitHub Guide](/products/splashkit/03-github-guide), which may
-be more up-to-date.
+This guide is the original onboarding guide that was created for this project,
+but this information has been moved to the SplashKit-wide
+[GitHub Guide](/products/splashkit/03-github-guide), which may be more
+up-to-date.
This guide is here as well in case it is easier to use.
@@ -16,54 +17,63 @@ This guide is here as well in case it is easier to use.
## Introduction
-This guide will cover all the steps required to get contributing to the splashkit-core repository.
-Feel free to skip steps which you have already completed or are familiar with.
+This guide will cover all the steps required to start contributing to the
+splashkit-core repository. Feel free to skip steps which you have already
+completed or are familiar with.
## Installing WSL
-WSL is a built-in Linux distribution virtual machine for Windows. splashkit-core will be installed
-to the Linux distribution. The official SplashKit installation instructions can be found here:
-[Windows (WSL) Installation Overview](https://splashkit.io/installation/windows-wsl). This guide has
-been tested with the default Ubuntu distribution, but others may also work.
+WSL (Windows Subsystem for Linux) runs a Linux distribution inside Windows.
+splashkit-core will be installed to the Linux distribution. The official
+SplashKit installation instructions can be found here:
+[Windows (WSL) Installation Overview](https://splashkit.io/installation/windows-wsl).
+This guide has been tested with the default Ubuntu distribution, but others may
+also work.
### Installing Visual Studio Code
-Once WSL has been installed to Windows, Visual Studio Code needs to be installed to WSL. The
-SplashKit documentation explains the process:
+Once WSL has been installed to Windows, Visual Studio Code needs to be installed
+to WSL. The SplashKit documentation explains the process:
[Install Visual Studio Code](https://splashkit.io/installation/windows-wsl/step-3)
### Installing Git
-Now Git must be installed to WSL. Follow the Microsoft installation instructions:
-[Install Git](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-git). GitHub's Git Cheat
-Sheet is useful for both installation and usage of Git:
+Now Git must be installed to WSL. Follow the Microsoft installation
+instructions:
+[Install Git](https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-git).
+GitHub's Git Cheat Sheet is useful for both installation and usage of Git:
[GitHub Git Cheat Sheet](https://education.github.com/git-cheat-sheet-education.pdf)
### Installing Windows Terminal (optional)
-Windows Terminal is an updated Command Prompt with many useful features. It is not mandatory to
-install, however it is recommended due to its ease of use. More information about Windows Terminal
-can be found here: [Windows Terminal](https://learn.microsoft.com/en-us/windows/terminal/).It can be
-installed through the Microsoft Store.
+Windows Terminal is an updated Command Prompt with many useful features. It is
+not mandatory to install; however, it is recommended due to its ease of use.
+More information about Windows Terminal can be found here:
+[Windows Terminal](https://learn.microsoft.com/en-us/windows/terminal/). It can
+be installed through the Microsoft Store.
-By default, new tabs will open as a Command Prompt, with WSL terminals being accessible with the
-down-arrow. Since WSL will be used so frequently, there is the option to change the default tab to
-WSL. Open the settings by clicking the down arrow and selecting ‘Settings’. In the ‘Startup’ tab,
-‘Default profile’ allows you to change the default tab type to WSL.
+By default, new tabs will open as a Command Prompt, with WSL terminals being
+accessible with the down-arrow. Since WSL will be used so frequently, there is
+the option to change the default tab to WSL. Open the settings by clicking the
+down arrow and selecting ‘Settings’. In the ‘Startup’ tab, ‘Default profile’
+allows you to change the default tab type to WSL.
## Setting Up splashkit-core
-Now that WSL is fully configured, it is time to install the splashkit-core repository.
+Now that WSL is fully configured, it is time to install the splashkit-core
+repository.
### Forking splashkit-core Repository
-A fork is a copy of a repository which allows for independent development without interfering with
-the primary repository itself. To create a personal fork of splashkit-core, navigate to
-[GitHub splashkit-core](https://github.com/splashkit/splashkit-core) and click ‘Fork’. On the next
-page, keep the default name and click ‘Create fork’.
+A fork is a copy of a repository which allows for independent development
+without interfering with the primary repository itself. To create a personal
+fork of splashkit-core, navigate to
+[GitHub splashkit-core](https://github.com/splashkit/splashkit-core) and click
+‘Fork’. On the next page, keep the default name and click ‘Create fork’.
-The following guide for the SplashKit Website has useful information regarding forks and branches
-which can be adjusted for splashkit-core by substituting repository names:
+The following guide for the SplashKit Website has useful information regarding
+forks and branches which can be adjusted for splashkit-core by substituting
+repository names:
[Get Your Environment Set Up](/products/splashkit/02-setting-up).
### Cloning splashkit-core Repository
@@ -74,16 +84,17 @@ Open a WSL terminal and change directory to your home with:
cd
```
-Note that this guide clones the repository to the home directory, but feel free to move its
-location. Now initiate the clone process of your fork with:
+Note that this guide clones the repository to the home directory, but feel free
+to move its location. Now initiate the clone process of your fork with:
```shell
git clone --recursive -j2 https://github.com/{username}/splashkit-core.git
```
-splashkit-core contains multiple submodules (separate repositories which splashkit-core depends
-upon). The `--recursive` argument ensures that the submodules are also downloaded when calling
-clone. Wait for the download to complete before continuing to the next step.
+splashkit-core contains multiple submodules (separate repositories which
+splashkit-core depends upon). The `--recursive` argument ensures that the
+submodules are also downloaded when calling clone. Wait for the download to
+complete before continuing to the next step.
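Once the clone finishes, the standard Git submodule commands below can be used to confirm everything arrived (a sketch; the exact submodule names depend on the repository):

```shell
cd splashkit-core
# List each submodule and its checked-out commit; a leading "-" means
# the submodule was not initialised.
git submodule status

# If any submodule is missing, this fetches it after the fact.
git submodule update --init --recursive
```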
## Contributing to splashkit-core
@@ -91,8 +102,9 @@ It is now time to start fixing bugs and adding functionality to splashkit-core.
### Creating Branch
-When modifying the repository, changes should be logically grouped together onto separate branches.
-To create a branch, open a WSL terminal and navigate to the `splashkit-core` folder with:
+When modifying the repository, changes should be logically grouped together onto
+separate branches. To create a branch, open a WSL terminal and navigate to the
+`splashkit-core` folder with:
```shell
cd splashkit-core
@@ -120,10 +132,11 @@ Now that a new branch is created and active, development can begin.
### Building the Test Programs
-You cannot create new programs with splashkit-core as you do when using the traditional SplashKit
-library. Instead, two programs are generated which can be configured to test its functionality:
-`sktest` and `skunit_tests`. They are built with CMake using a preconfigured `CMakeLists.txt` file.
-Open a WSL terminal and enter:
+You cannot create new programs with splashkit-core as you do when using the
+traditional SplashKit library. Instead, two programs are generated which can be
+configured to test its functionality: `sktest` and `skunit_tests`. They are
+built with CMake using a preconfigured `CMakeLists.txt` file. Open a WSL
+terminal and enter:
```shell
cd
@@ -158,19 +171,23 @@ Or for skunit_tests:
### Making Changes
-`sktest` is built with the .cpp files from `~/splashkit-core/coreskd/src/test/`. To add your own
-tests, modify one or more of the files such as `test_animation.cpp`.
+`sktest` is built with the .cpp files from `~/splashkit-core/coresdk/src/test/`.
+To add your own tests, modify one or more of the files such as
+`test_animation.cpp`.
-`skunit_tests` is built with the .cpp files from `~/splashkit-core/coreskd/src/test/unit_tests/`.
-When it runs, all unit tests from all files in this folder are executed. Additional files can be
-added to this folder if necessary. If adding a new file, copy the structure from one of the existing
-unit test files. Critically, `#include "catch.hpp"` must be present in the file for it to be
-compiled into `skunit_tests`. Beyond that, the hierarchy of, `TEST_CASE > SECTION > ASSERTION`
-should be followed to improve readability and tracing of errors.
+`skunit_tests` is built with the .cpp files from
+`~/splashkit-core/coresdk/src/test/unit_tests/`. When it runs, all unit tests
+from all files in this folder are executed. Additional files can be added to
+this folder if necessary. If adding a new file, copy the structure from one of
+the existing unit test files. Critically, `#include "catch.hpp"` must be present
+in the file for it to be compiled into `skunit_tests`. Beyond that, the
+hierarchy of `TEST_CASE > SECTION > ASSERTION` should be followed to improve
+readability and tracing of errors.
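As a sketch of that hierarchy (file, case, and section names here are hypothetical; `catch.hpp` is supplied by the repository, so this only compiles within the skunit_tests build):

```cpp
// Hypothetical file: coresdk/src/test/unit_tests/test_example.cpp
#include "catch.hpp" // required for the file to be compiled into skunit_tests

TEST_CASE("example arithmetic behaves as expected")
{
    SECTION("addition")
    {
        REQUIRE(1 + 1 == 2); // assertion inside a named section
    }

    SECTION("subtraction")
    {
        REQUIRE(2 - 1 == 1);
    }
}
```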
### Testing Changes
-If a change is made to the code, the test programs need to be rebuilt. In a WSL terminal enter:
+If a change is made to the code, the test programs need to be rebuilt. In a WSL
+terminal enter:
```shell
cd
@@ -178,7 +195,8 @@ cd splashkit-core/projects/cmake
make
```
-If any files were created or deleted, the CMake files need to be regenerated. In that case use:
+If any files were created or deleted, the CMake files need to be regenerated. In
+that case use:
```shell
cd
@@ -189,51 +207,55 @@ make
### Documenting Changes
-Local changes can be tested by building and running the test programs. However, once changes are to
-be submitted for review, they need to be staged, committed and pushed. It is good practice to
-perform multiple smaller commits with meaningful descriptions rather than a single monolithic
-commit. In addition, pushing commits to GitHub provides a layer of backup in case of local machine
+Local changes can be tested by building and running the test programs. However,
+once changes are to be submitted for review, they need to be staged, committed
+and pushed. It is good practice to perform multiple smaller commits with
+meaningful descriptions rather than a single monolithic commit. In addition,
+pushing commits to GitHub provides a layer of backup in case of local machine
failure.
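A typical stage/commit/push cycle from inside the repository might look like the following (branch and file names are illustrative only):

```shell
# Stage only the files belonging to this logical change
git add coresdk/src/test/unit_tests/test_example.cpp

# Commit with a meaningful, descriptive message
git commit -m "Add unit tests for example arithmetic"

# Push the branch to your fork on GitHub (also serves as a backup)
git push origin my-feature-branch
```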
### Creating a Pull Request
-Once you have completed work on a particular branch, a pull request (PR) can be made. At this point
-there are now three relevant splashkit-core repositories at play: splashkit-core itself,
-thoth-tech’s fork, and your personal fork. During trimester, PRs should be made against the
-thoth-tech fork. The PR template provides a framework for how to structure the associated PR
-documentation.
+Once you have completed work on a particular branch, a pull request (PR) can be
+made. At this point there are now three relevant splashkit-core repositories at
+play: splashkit-core itself, thoth-tech’s fork, and your personal fork. During
+trimester, PRs should be made against the thoth-tech fork. The PR template
+provides a framework for how to structure the associated PR documentation.
-The following guide details how to create PRs for the SplashKit Website. The same instructions can
-be used for splashkit-core by simply changing the repository name:
+The following guide details how to create PRs for the SplashKit Website. The
+same instructions can be used for splashkit-core by simply changing the
+repository name:
[How to Create a Pull Request](/products/splashkit/04-pull-request).
### Responding to Peer Reviews
-If changes are requested during a PR review, pushing further commits to the same branch will
-automatically be added to the PR.
+If changes are requested during a PR review, pushing further commits to the same
+branch will automatically be added to the PR.
### Performing Peer Reviews
-A critical component to SplashKit development is the process of reviewing your peers' PRs and
-providing constructive feedback. This process has been detailed in the following guide:
+A critical component to SplashKit development is the process of reviewing your
+peers' PRs and providing constructive feedback. This process has been detailed
+in the following guide:
[A Guide to Doing Peer Reviews](/products/splashkit/06-peer-review)
### Planner Board
-The planner board is used to coordinate tasks while they are being completed and reviewed. The
-following guide details the procedure and etiquette which is expected while using the planner board:
+The planner board is used to coordinate tasks while they are being completed and
+reviewed. The following guide details the procedure and etiquette which is
+expected while using the planner board:
[Planner Board Etiquette](/products/splashkit/07-planner-board)
## Troubleshooting
-Solutions for common issues can be found below. Be sure to also check the following page for help
-troubleshooting:
+Solutions for common issues can be found below. Be sure to also check the
+following page for help troubleshooting:
[Guide to resolving Common Issues](/products/splashkit/03-github-guide/#troubleshooting)
### Empty Translator folder
-If the translator folder is empty, it may be due to an issue with the submodules. In an WSL
-terminal, enter the following:
+If the translator folder is empty, it may be due to an issue with the
+submodules. In a WSL terminal, enter the following:
```shell
cd
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/03-improvement-suggestion-add-oop-collision-resolution.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/03-improvement-suggestion-add-oop-collision-resolution.mdx
index ebb084d6f..fb69e93b0 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/03-improvement-suggestion-add-oop-collision-resolution.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/03-improvement-suggestion-add-oop-collision-resolution.mdx
@@ -1,8 +1,8 @@
---
title: "Suggested Improvement: Adding OOP Principles to Shape Types"
description:
- This page is about adding OOP principles to SplashKit's Shape types to simplify the collision
- classification and resolution functions
+ This page is about adding OOP principles to SplashKit's Shape types to
+ simplify the collision classification and resolution functions
sidebar:
label: "Suggested Improvement for Shape Types"
---
@@ -10,24 +10,28 @@ sidebar:
## Description
This document outlines the refactoring of a previous PR:
-[Pull Request 83](https://github.com/thoth-tech/splashkit-core/pull/83). The goal was to eliminate
-the need for void and function pointers by having sprites, rectangles, circles, triangles and quads
-inherit from the abstract class shape. Unfortunately, I was unable to get the program to compile. I
-am fairly certain that it is due to circular dependencies.
-
-The major issue is that `_sprite_data` is abstracted entirely from students by being declared in
-`sprite.cpp`. However, moving `_sprite_data` from `sprites.cpp` to `sprites.h` leads to circular
-dependencies between `collision.h`, `types.h`, `sprites.h` and `backend_types.h`.
-
-I am sure that the compilation errors could be resolved, however it would be a difficult process. My
-plan would be to split larger header files such as `types.h` into smaller files for each struct.
-There would be then be a `rectangle.h`, `circle.h` and so on.
-
-Similarly, `backend_types.h` would be split into many smaller header files for each enum. This would
-allow for fine-grain control of includes which will make diagnosing and rectifying the circular
-dependencies easier.
-
-By moving to an OOP approach, the code can be more understood, debugged and maintained.
+[Pull Request 83](https://github.com/thoth-tech/splashkit-core/pull/83). The
+goal was to eliminate the need for void and function pointers by having sprites,
+rectangles, circles, triangles and quads inherit from the abstract class shape.
+Unfortunately, I was unable to get the program to compile. I am fairly certain
+that it is due to circular dependencies.
+
+The major issue is that `_sprite_data` is abstracted entirely from students by
+being declared in `sprite.cpp`. However, moving `_sprite_data` from
+`sprites.cpp` to `sprites.h` leads to circular dependencies between
+`collision.h`, `types.h`, `sprites.h` and `backend_types.h`.
+
+I am sure that the compilation errors could be resolved, however it would be a
+difficult process. My plan would be to split larger header files such as
+`types.h` into smaller files for each struct. There would then be a
+`rectangle.h`, `circle.h` and so on.
+
+Similarly, `backend_types.h` would be split into many smaller header files for
+each enum. This would allow for fine-grained control of includes, which will make
+diagnosing and rectifying the circular dependencies easier.
+
+By moving to an OOP approach, the code can be more easily understood, debugged,
+and maintained.
## Code
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/04-nuget-package-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/04-nuget-package-guide.mdx
index cbdf435d6..2c7263480 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/04-nuget-package-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/04-nuget-package-guide.mdx
@@ -1,27 +1,30 @@
---
title: "NuGet Package Guide"
-description: "Guide to building and testing the SplashKit NuGet package for .NET"
+description:
+ "Guide to building and testing the SplashKit NuGet package for .NET"
---
import { FileTree, Steps } from "@astrojs/starlight/components";
## Introduction
-NuGet is the package manager for .NET. NuGet packages are essentially zip files containing reusable
-code, libraries, and metadata. Packages can be distributed privately or publicly via NuGet
-repositories. For more information, visit [nuget.org](https://www.nuget.org/).
+NuGet is the package manager for .NET. NuGet packages are essentially zip files
+containing reusable code, libraries, and metadata. Packages can be distributed
+privately or publicly via NuGet repositories. For more information, visit
+[nuget.org](https://www.nuget.org/).
The SplashKit NuGet package has two primary purposes:
-- Provides C# bindings for SplashKit, translated from C++ using SplashKit Translator. The bindings
- allow users to create SplashKit projects with C#.
-- Provides SplashKit native libraries for Windows (x64) and macOS. These libraries allow Windows and
- Mac users to create and run SplashKit projects in C# without installing the libraries separately
- via SplashKit Manager (SKM).
+- Provides C# bindings for SplashKit, translated from C++ using SplashKit
+ Translator. The bindings allow users to create SplashKit projects with C#.
+- Provides SplashKit native libraries for Windows (x64) and macOS. These
+ libraries allow Windows and Mac users to create and run SplashKit projects in
+ C# without installing the libraries separately via SplashKit Manager (SKM).
:::note
-**Prerequisite:** Ensure [.NET SDK](https://dotnet.microsoft.com/en-us/download) is installed.
+**Prerequisite:** Ensure [.NET SDK](https://dotnet.microsoft.com/en-us/download)
+is installed.
:::
@@ -31,27 +34,29 @@ The SplashKit NuGet package has two primary purposes:
1. #### Download the SplashKit native libraries
- Download the latest stable libraries from [SKM](https://github.com/splashkit/skm.git). Libraries
- should be placed in `/tools/scripts/nuget-pkg/Libraries/win64` and
- `/tools/scripts/nuget-pkg/Libraries/macos` respectively. This step can be automated via the use
- of the bash script found at `/tools/scripts/nuget-pkg/download-libraries.sh`.
+ Download the latest stable libraries from
+ [SKM](https://github.com/splashkit/skm.git). Libraries should be placed in
+ `/tools/scripts/nuget-pkg/Libraries/win64` and
+ `/tools/scripts/nuget-pkg/Libraries/macos` respectively. This step can be
+ automated by using the bash script found at
+ `/tools/scripts/nuget-pkg/download-libraries.sh`.
:::note
- SKM does not include pre-compiled libraries for Linux. It instead uses a script to install
- dependencies and builds based on the detected distro.
+ SKM does not include pre-compiled libraries for Linux. It instead uses a
+ script to install dependencies and builds based on the detected distro.
:::
2. #### Open the NuGet package directory
- Navigate to `/tools/scripts/nuget-pkg`. This directory contains configuration settings for the
- package, along with the associated icon and description.
+ Navigate to `/tools/scripts/nuget-pkg`. This directory contains configuration
+ settings for the package, along with the associated icon and description.
3. #### Build the package
- Run one of the below commands, ensuring to replace _YOUR_VERSION_ with the relevant version
- number (e.g. 1.3.0):
+ Run one of the commands below, making sure to replace _YOUR_VERSION_ with the
+ relevant version number (e.g. 1.3.0):
**For debug**
@@ -65,28 +70,29 @@ The SplashKit NuGet package has two primary purposes:
dotnet build --configuration Release -p:version=YOUR_VERSION
```
- This will build the package and output to `tools/scripts/nuget-pkg/bin/` as detailed in
- [Exploring the Output](#exploring-the-output).
+ This will build the package and output to `tools/scripts/nuget-pkg/bin/` as
+ detailed in [Exploring the Output](#exploring-the-output).
4. #### Check the output directories
- Ensure both `Release` and `Debug` directories exist in `tools/scripts/nuget-pkg/bin/`. If either
- is missing, create the empty directory as needed. This is to ensure the test programs run without
- errors.
+ Ensure both `Release` and `Debug` directories exist in
+ `tools/scripts/nuget-pkg/bin/`. If either is missing, create the empty
+ directory as needed. This is to ensure the test programs run without errors.
### Exploring the Output
-In the `tools/scripts/nuget-pkg/bin/` directory, there will be a directory corresponding to the
-built package - either `Debug` or `Release`. This directory contains the NuGet package itself
-(`SplashKit.X.X.X.nupkg`), along with separate directories for each targeted .NET version. These
-directories are for referencing/testing, and can be ignored - the package itself does not depend
+In the `tools/scripts/nuget-pkg/bin/` directory, there will be a directory
+corresponding to the built package - either `Debug` or `Release`. This directory
+contains the NuGet package itself (`SplashKit.X.X.X.nupkg`), along with separate
+directories for each targeted .NET version. These directories are for
+referencing/testing, and can be ignored - the package itself does not depend
upon them.
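As an illustrative sketch, a release build's output might be laid out like this (package version and framework names are examples only):

<FileTree>

- bin/
  - Release/
    - SplashKit.1.3.0.nupkg
    - net8.0/
    - net9.0/
  - Debug/

</FileTree>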
## Testing
-A suite of test programs is provided in the form of a C# solution, with a separate project/directory
-for each test.
+A suite of test programs is provided in the form of a C# solution, with a
+separate project/directory for each test.
@@ -102,9 +108,9 @@ for each test.
-The provided tests are translations of the SplashKit core integration tests. The tests can be run
-individually from their respective folders, but are best launched using the test runner located in
-the `Main` directory.
+The provided tests are translations of the SplashKit core integration tests. The
+tests can be run individually from their respective folders, but are best
+launched using the test runner located in the `Main` directory.
:::note
@@ -119,18 +125,19 @@ For more information on SplashKit core integration tests, see
1. #### Check local package sources
- `NuGet.config` sets the local NuGet package sources. If one of these directories does not exist,
- either create it, or comment out the corresponding entry in `NuGet.config`.
+ `NuGet.config` sets the local NuGet package sources. If one of these
+ directories does not exist, either create it, or comment out the
+ corresponding entry in `NuGet.config`.
2. #### Set target package version
- `Directory.Packages.props` specifies the target NuGet package version. Update this file to match
- the version being tested.
+ `Directory.Packages.props` specifies the target NuGet package version. Update
+ this file to match the version being tested.
3. #### Check target .NET versions
- `Directory.Build.props` specifies .NET versions for multi-targeting. These should match the .NET
- versions being targeted by the package build.
+ `Directory.Build.props` specifies .NET versions for multi-targeting. These
+ should match the .NET versions being targeted by the package build.
:::caution
@@ -140,8 +147,9 @@ For more information on SplashKit core integration tests, see
4. #### Run the test runner
- Open the `Main` directory. Run the following command, replacing _TARGET_FRAMEWORK_ with the
- framework to be tested, e.g. For .NET 9, use `dotnet run -f net9.0`.
+ Open the `Main` directory. Run the following command, replacing
+ _TARGET_FRAMEWORK_ with the framework to be tested; e.g. for .NET 9, use
+ `dotnet run -f net9.0`.
```shell
dotnet run -f TARGET_FRAMEWORK
@@ -149,9 +157,9 @@ For more information on SplashKit core integration tests, see
:::note
- To aid testing and ensure consistency, the test runner lists the target and runtime .NET
- framework, along with the NuGet package version. These should be checked on every run to ensure
- alignment with the intended targets:
+ To aid testing and ensure consistency, the test runner lists the target and
+ runtime .NET framework, along with the NuGet package version. These should be
+ checked on every run to ensure alignment with the intended targets:

@@ -163,24 +171,26 @@ For more information on SplashKit core integration tests, see
:::caution
- **Known issue:** The included graphics test currently produces inconsistent results in a
- MSYS2/Windows environment. This behaviour is consistent with SplashKit installed via SKM and is
- being investigated.
+ **Known issue:** The included graphics test currently produces inconsistent
+ results in a MSYS2/Windows environment. This behaviour is consistent with
+ SplashKit installed via SKM and is being investigated.
:::
6. #### Re-run as necessary
- To ensure the package is fully functional, the tests should be re-run to cover each combination
- of framework and architecture. Every test should produce the expected output.
+ To ensure the package is fully functional, the tests should be re-run to
+ cover each combination of framework and architecture. Every test should
+ produce the expected output.
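To illustrate steps 1 and 2 above, minimal sketches of the two configuration files might look like the following (the source name, relative path, and version number are assumptions, not the repository's actual values):

```xml
<!-- NuGet.config (sketch): registers a local package source.
     Comment out the entry if the directory does not exist. -->
<configuration>
  <packageSources>
    <add key="LocalRelease" value="../nuget-pkg/bin/Release" />
  </packageSources>
</configuration>
```

```xml
<!-- Directory.Packages.props (sketch): pins the package version under test. -->
<Project>
  <ItemGroup>
    <PackageVersion Include="SplashKit" Version="1.3.0" />
  </ItemGroup>
</Project>
```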
### Updating or Adding Tests
-As noted in [Running the Provided Tests](#running-the-provided-tests), the _NuGetTests_ solution
-specifies the NuGet version and target frameworks in `Directory.Packages.props` and
-`Directory.Build.props` respectively. These should be updated at a solution level, rather than
-separate configurations for each test project. This ensures ease of use and reduces room for error.
+As noted in [Running the Provided Tests](#running-the-provided-tests), the
+_NuGetTests_ solution specifies the NuGet version and target frameworks in
+`Directory.Packages.props` and `Directory.Build.props` respectively. These
+should be updated at a solution level, rather than separate configurations for
+each test project. This ensures ease of use and reduces room for error.
### Adding New Tests to the Solution
@@ -198,13 +208,13 @@ separate configurations for each test project. This ensures ease of use and redu
dotnet new console
```
-3. Edit the project's `.csproj` file to remove any `` tags, since this would
- override the solution's multi-targeting.
+3. Edit the project's `.csproj` file to remove any `<TargetFramework>` tags,
+ since this would override the solution's multi-targeting.
4. Write your new test in `Program.cs`.
-5. If loading resources (e.g. images, fonts, etc.) use the following line to utilise existing
- resources from the main SplashKit test suite.
+5. If loading resources (e.g. images, fonts, etc.), use the following line to
+ utilise existing resources from the main SplashKit test suite.
```cs
SetResourcesPath(GlobalSettings.ResourcePath);
@@ -214,20 +224,21 @@ separate configurations for each test project. This ensures ease of use and redu
:::note
-The test runner checks the solution directory for any `.csproj` files on input. Therefore, new tests
-are detected by the runner automatically, and can even be added without closing the runner.
+The test runner checks the solution directory for any `.csproj` files on input.
+Therefore, new tests are detected by the runner automatically, and can even be
+added without closing the runner.
:::
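The `.csproj` edit described in step 3 above might look like the following sketch (property names are standard MSBuild conventions, assumed here rather than taken from the repository):

```xml
<!-- Example .csproj after editing (sketch): the template's single
     target-framework property has been removed so the solution-level
     multi-targeting in Directory.Build.props applies. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```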
### Updating Existing Tests
-Existing tests can be updated by editing `Program.cs` in the corresponding project's directory.
-Since the test runner (`Main`) builds each project before running, there is no need to rebuild the
-runner upon updating a test. The runner is designed to be left running during updates, to speed up
-development.
+Existing tests can be updated by editing `Program.cs` in the corresponding
+project's directory. Since the test runner (`Main`) builds each project before
+running, there is no need to rebuild the runner upon updating a test. The runner
+is designed to be left running during updates, to speed up development.
-If loading resources (e.g. images, fonts, etc.) use the following line to utilise existing resources
-from the main SplashKit test suite:
+If loading resources (e.g. images, fonts, etc.), use the following line to
+utilise existing resources from the main SplashKit test suite:
```cs
SetResourcesPath(GlobalSettings.ResourcePath);
@@ -235,9 +246,9 @@ SetResourcesPath(GlobalSettings.ResourcePath);
### Creating Tests Outside of the Provided Solution
-Normally, an end user would use `dotnet add package splashkit` to reference the SplashKit package.
-However, this will default to the latest stable SplashKit version, skipping the locally built one.
-Instead, do the following:
+Normally, an end user would use `dotnet add package splashkit` to reference the
+SplashKit package. However, this will default to the latest stable SplashKit
+version, skipping the locally built one. Instead, do the following:
@@ -247,11 +258,12 @@ Instead, do the following:
dotnet new console
```
-2. Create a `NuGet.config` file in the project (or solution) directory, specifying the path to the
- target package. An example can be found in `/tools/scripts/test`.
+2. Create a `NuGet.config` file in the project (or solution) directory,
+ specifying the path to the target package. An example can be found in
+ `/tools/scripts/test`.
-3. Add the following to the `.csproj`, replacing _TARGET_VERSION_ with the targeted NuGet package
- version:
+3. Add the following to the `.csproj`, replacing _TARGET_VERSION_ with the
+ targeted NuGet package version:
```xml
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/index.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/index.mdx
index d7f935ff4..195e17094 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/index.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/index.mdx
@@ -5,20 +5,28 @@ sidebar:
order: 10
---
-import { Aside, Card, LinkCard, CardGrid, Icon } from "@astrojs/starlight/components";
+import {
+ Aside,
+ Card,
+ LinkCard,
+ CardGrid,
+ Icon,
+} from "@astrojs/starlight/components";
## Contributing to splashkit-core
-This guide will cover all the steps required to get contributing to the splashkit-core repository.
-Feel free to skip steps which you have already completed or are familiar with.
+This guide will cover all the steps required to start contributing to the
+splashkit-core repository. Feel free to skip steps which you have already
+completed or are familiar with.
-If you haven't already, ensure you have setup your work environment by following the
-[Setting up your environment guide](/products/splashkit/02-setting-up)
+If you haven't already, ensure you have set up your work environment by following
+the [Setting up your environment guide](/products/splashkit/02-setting-up)
### Creating Branch
-When modifying the repository, changes should be logically grouped together onto separate branches.
-To create a branch, open a WSL terminal and navigate to the `splashkit-core` folder with:
+When modifying the repository, changes should be logically grouped together onto
+separate branches. To create a branch, open a WSL terminal and navigate to the
+`splashkit-core` folder with:
```shell
cd splashkit-core
@@ -46,10 +54,11 @@ Now that a new branch is created and active, development can begin.
### Building the Test Programs
-You cannot create new programs with splashkit-core as you do when using the traditional SplashKit
-library. Instead, two programs are generated which can be configured to test its functionality:
-`sktest` and `skunit_tests`. They are built with CMake using a preconfigured `CMakeLists.txt` file.
-Open a WSL terminal and enter:
+You cannot create new programs with splashkit-core as you do when using the
+traditional SplashKit library. Instead, two programs are generated which can be
+configured to test its functionality: `sktest` and `skunit_tests`. They are
+built with CMake using a preconfigured `CMakeLists.txt` file. Open a WSL
+terminal and enter:
```shell
cd
@@ -84,19 +93,23 @@ Or for skunit_tests:
### Making Changes
-`sktest` is built with the .cpp files from `~/splashkit-core/coreskd/src/test/`. To add your own
-tests, modify one or more of the files such as `test_animation.cpp`.
+`sktest` is built with the .cpp files from `~/splashkit-core/coresdk/src/test/`.
+To add your own tests, modify one or more of the files such as
+`test_animation.cpp`.
-`skunit_tests` is built with the .cpp files from `~/splashkit-core/coreskd/src/test/unit_tests/`.
-When it runs, all unit tests from all files in this folder are executed. Additional files can be
-added to this folder if necessary. If adding a new file, copy the structure from one of the existing
-unit test files. Critically, `#include "catch.hpp"` must be present in the file for it to be
-compiled into `skunit_tests`. Beyond that, the hierarchy of, `TEST_CASE > SECTION > ASSERTION`
-should be followed to improve readability and tracing of errors.
+`skunit_tests` is built with the .cpp files from
+`~/splashkit-core/coresdk/src/test/unit_tests/`. When it runs, all unit tests
+from all files in this folder are executed. Additional files can be added to
+this folder if necessary. If adding a new file, copy the structure from one of
+the existing unit test files. Critically, `#include "catch.hpp"` must be present
+in the file for it to be compiled into `skunit_tests`. Beyond that, the
+hierarchy of `TEST_CASE > SECTION > ASSERTION` should be followed to improve
+readability and tracing of errors.
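As an illustrative sketch only (the test name and checks are placeholders, not real SplashKit tests), a new unit test file following that hierarchy might look like:

```cpp
// Illustrative skeleton of a new unit test file. The include line is what
// makes the file get compiled into `skunit_tests`; the case, section, and
// assertion below are placeholders to show the hierarchy, not real tests.
#include "catch.hpp"

TEST_CASE("my feature behaves as expected")
{
    SECTION("a simple property holds")
    {
        int result = 2 + 2;   // replace with a real SplashKit call
        REQUIRE(result == 4); // the assertion level of the hierarchy
    }
}
```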
### Testing Changes
-If a change is made to the code, the test programs need to be rebuilt. In a WSL terminal enter:
+If a change is made to the code, the test programs need to be rebuilt. In a WSL
+terminal enter:
```shell
cd
@@ -104,7 +117,8 @@ cd splashkit-core/projects/cmake
make
```
-If any files were created or deleted, the CMake files need to be regenerated. In that case use:
+If any files were created or deleted, the CMake files need to be regenerated. In
+that case use:
```shell
cd
@@ -115,43 +129,47 @@ make
### Documenting Changes
-Local changes can be tested by building and running the test programs. However, once changes are to
-be submitted for review, they need to be staged, committed and pushed. It is good practice to
-perform multiple smaller commits with meaningful descriptions rather than a single monolithic
-commit. In addition, pushing commits to GitHub provides a layer of backup in case of local machine
+Local changes can be tested by building and running the test programs. However,
+once changes are ready to be submitted for review, they need to be staged,
+committed, and pushed. It is good practice to perform multiple smaller commits
+with
+meaningful descriptions rather than a single monolithic commit. In addition,
+pushing commits to GitHub provides a layer of backup in case of local machine
failure.
### Creating a Pull Request
-Once you have completed work on a particular branch, a pull request (PR) can be made. At this point
-there are now three relevant splashkit-core repositories at play: splashkit-core itself,
-thoth-tech’s fork, and your personal fork. During trimester, PRs should be made against the
-thoth-tech fork. The PR template provides a framework for how to structure the associated PR
-documentation.
+Once you have completed work on a particular branch, a pull request (PR) can be
+made. At this point there are three relevant splashkit-core repositories at
+play: splashkit-core itself, thoth-tech’s fork, and your personal fork. During
+trimester, PRs should be made against the thoth-tech fork. The PR template
+provides a framework for how to structure the associated PR documentation.
-The following guide details how to create PRs for the SplashKit Website. The same instructions can
-be used for splashkit-core by simply changing the repository name:
+The following guide details how to create PRs for the SplashKit Website. The
+same instructions can be used for splashkit-core by simply changing the
+repository name:
[How to Create a Pull Request](/products/splashkit/04-pull-request).
### Responding to Peer Reviews
-If changes are requested during a PR review, pushing further commits to the same branch will
-automatically be added to the PR.
+If changes are requested during a PR review, pushing further commits to the same
+branch will automatically be added to the PR.
### Performing Peer Reviews
-A critical component to SplashKit development is the process of reviewing your peers' PRs and
-providing constructive feedback. This process has been detailed in the following guide:
+A critical component to SplashKit development is the process of reviewing your
+peers' PRs and providing constructive feedback. This process has been detailed
+in the following guide:
[A Guide to Doing Peer Reviews](/products/splashkit/06-peer-review)
### Planner Board
-The planner board is used to coordinate tasks while they are being completed and reviewed. The
-following guide details the procedure and etiquette which is expected while using the planner board:
+The planner board is used to coordinate tasks while they are being completed and
+reviewed. The following guide details the procedure and etiquette which is
+expected while using the planner board:
[Planner Board Etiquette](/products/splashkit/07-planner-board)
## Troubleshooting
-Solutions for common issues can be found below. Be sure to also check the following page for help
-troubleshooting:
+Solutions for common issues can be found below. Be sure to also check the
+following page for help troubleshooting:
[Guide to resolving Common Issues](/products/splashkit/03-github-guide/#troubleshooting)
diff --git a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/peer-review-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/peer-review-guide.mdx
index b7e341395..76e06fbd6 100644
--- a/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/peer-review-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/SplashKit Expansion/peer-review-guide.mdx
@@ -12,33 +12,37 @@ import { Aside } from "@astrojs/starlight/components";
-In SplashKit, peer reviews are an essential part of maintaining high-quality code. The Peer-Review
-Checklist provided below is required for every pull request and ensures that all contributions meet
-a consistent standard across the project. This checklist covers essential aspects like code quality,
+In SplashKit, peer reviews are an essential part of maintaining high-quality
+code. The Peer-Review Checklist provided below is required for every pull
+request and ensures that all contributions meet a consistent standard across the
+project. This checklist covers essential aspects like code quality,
functionality, and testing.
-However, we recognize that every feature or task is different, and it’s difficult to capture all
-potential review points in a single checklist. That’s why we’ve also included a set of Peer-Review
-Prompts. These prompts are not mandatory but serve as a resource to guide the peer-review
-discussion. Since peer reviews should always be collaborative, these prompts help ensure that the
-review process is conversational and thorough, encouraging reviewers to think critically and explore
-areas that may not be immediately obvious.
+However, we recognize that every feature or task is different, and it’s
+difficult to capture all potential review points in a single checklist. That’s
+why we’ve also included a set of Peer-Review Prompts. These prompts are not
+mandatory but serve as a resource to guide the peer-review discussion. Since
+peer reviews should always be collaborative, these prompts help ensure that the
+review process is conversational and thorough, encouraging reviewers to think
+critically and explore areas that may not be immediately obvious.
-Remember, the goal of peer reviews is not only to verify the quality of the code but also to foster
-a collaborative environment where we improve together.
+Remember, the goal of peer reviews is not only to verify the quality of the code
+but also to foster a collaborative environment where we improve together.
@@ -55,16 +59,18 @@ has been given.
## Code Quality
-- [ ] Repository: Is this Pull Request is made to the correct repository? (Thoth-Tech NOT SplashKit)
-- [ ] Readability: Is the code easy to read and follow? If not are there comments to help understand
- the code?
-- [ ] Maintainability: Can this code be easily maintained or extended in the future?
+- [ ] Repository: Is this Pull Request made to the correct repository?
+ (Thoth-Tech NOT SplashKit)
+- [ ] Readability: Is the code easy to read and follow? If not, are there
+ comments to help understand the code?
+- [ ] Maintainability: Can this code be easily maintained or extended in the
+ future?
## Functionality
- [ ] Correctness: Does the code meet the requirements of the task?
-- [ ] Impact on Existing Functionality: Has the impact on existing functionality been considered and
- tested?
+- [ ] Impact on Existing Functionality: Has the impact on existing functionality
+ been considered and tested?
## Testing
@@ -73,55 +79,63 @@ has been given.
## Documentation
-- [ ] Documentation: Are both inline and applicable external documentation updated and clear?
+- [ ] Documentation: Are both inline and applicable external documentation
+ updated and clear?
## Pull Request Details
- [ ] PR Description: Is the problem being solved clearly described?
-- [ ] Checklist Completion: Have all relevant checklist items been reviewed and completed?
+- [ ] Checklist Completion: Have all relevant checklist items been reviewed and
+ completed?
```
### SplashKit Review Prompts
-- **Type of Change**: Does this Pull Request correctly identify the type of change (bug fix, new
- feature, breaking change, or documentation update)? Is it aligned with the stated issue or task?
-
-- **Code Readability**: Is the code structure clean and easy to follow? Could it benefit from
- clearer variable names, additional comments, or better organization? Would this code be
- understandable for a new developer joining the project?
-
-- **Maintainability**: How maintainable is the code? Is it modular and easy to extend in the future?
- Does it avoid creating technical debt? Is the codebase as simple as possible while still
- accomplishing the task?
-
-- **Code Simplicity**: Are there any overly complex or redundant sections in the code? Could they be
- refactored for better simplicity or clarity? Does the code follow established design patterns and
- best practices?
-
-- **Edge Cases**: Does the implementation consider potential edge cases? What could go wrong with
- this code in unusual or unexpected scenarios? Are there any cases that haven’t been fully
- addressed?
-
-- **Test Thoroughness**: Are all key scenarios (including edge cases and failure paths) covered by
- tests? Could additional tests help ensure the reliability of the code? Has the code been tested
- across different environments (e.g., multiple browsers or platforms)?
-
-- **Backward Compatibility**: Does this change break any existing functionality? If so, has backward
- compatibility been handled or documented appropriately? Are there any warnings or notes in the
- documentation regarding compatibility?
-
-- **Performance Considerations**: Could this code have a negative impact on performance? Have any
- performance concerns been documented and tested? Could the code be optimized for better efficiency
- without sacrificing readability?
-
-- **Security Concerns**: Could this change introduce security vulnerabilities, especially in terms
- of input validation or sensitive data handling? Have security best practices been followed? Does
- this code ensure proper user data handling?
-
-- **Dependencies**: Are the new dependencies truly necessary? Could they create conflicts or issues
- down the line, particularly during upgrades or with other libraries in the project? Is there a
- simpler way to achieve the same functionality without adding new dependencies?
-
-- **Documentation**: Is the documentation clear and complete for both internal developers and
- external users? Could a new developer understand how to use or modify this feature from the
- documentation provided? Does it cover any API or external interface changes?
+- **Type of Change**: Does this Pull Request correctly identify the type of
+ change (bug fix, new feature, breaking change, or documentation update)? Is it
+ aligned with the stated issue or task?
+
+- **Code Readability**: Is the code structure clean and easy to follow? Could it
+ benefit from clearer variable names, additional comments, or better
+ organization? Would this code be understandable for a new developer joining
+ the project?
+
+- **Maintainability**: How maintainable is the code? Is it modular and easy to
+ extend in the future? Does it avoid creating technical debt? Is the codebase
+ as simple as possible while still accomplishing the task?
+
+- **Code Simplicity**: Are there any overly complex or redundant sections in the
+ code? Could they be refactored for better simplicity or clarity? Does the code
+ follow established design patterns and best practices?
+
+- **Edge Cases**: Does the implementation consider potential edge cases? What
+ could go wrong with this code in unusual or unexpected scenarios? Are there
+ any cases that haven’t been fully addressed?
+
+- **Test Thoroughness**: Are all key scenarios (including edge cases and failure
+ paths) covered by tests? Could additional tests help ensure the reliability of
+ the code? Has the code been tested across different environments (e.g.,
+ multiple browsers or platforms)?
+
+- **Backward Compatibility**: Does this change break any existing functionality?
+ If so, has backward compatibility been handled or documented appropriately?
+ Are there any warnings or notes in the documentation regarding compatibility?
+
+- **Performance Considerations**: Could this code have a negative impact on
+ performance? Have any performance concerns been documented and tested? Could
+ the code be optimized for better efficiency without sacrificing readability?
+
+- **Security Concerns**: Could this change introduce security vulnerabilities,
+ especially in terms of input validation or sensitive data handling? Have
+ security best practices been followed? Does this code ensure proper user data
+ handling?
+
+- **Dependencies**: Are the new dependencies truly necessary? Could they create
+ conflicts or issues down the line, particularly during upgrades or with other
+ libraries in the project? Is there a simpler way to achieve the same
+ functionality without adding new dependencies?
+
+- **Documentation**: Is the documentation clear and complete for both internal
+ developers and external users? Could a new developer understand how to use or
+ modify this feature from the documentation provided? Does it cover any API or
+ external interface changes?
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/execution-environment.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/execution-environment.md
index f7fda1845..3f7b3f153 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/execution-environment.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/execution-environment.md
@@ -1,41 +1,47 @@
---
title: ExecutionEnvironment - Code Documentation
-description: An explanation of what ExecutionEnvironment is, its methods, members, and events.
+description:
+ An explanation of what ExecutionEnvironment is, its methods, members, and
+ events.
---
[_executionEnvironment.js_](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/executionEnvironment.js)
-ExecutionEnvironment is a class designed to abstract out running the user's code, and also handle
-the environment itself (such as resetting variables, preloading files, etc). It contains functions
-to 'compile' user code, run the main program, reset itself, and create directories/files inside the
-environment.
+ExecutionEnvironment is a class designed to abstract out running the user's
+code, and also handle the environment itself (such as resetting variables,
+preloading files, etc). It contains functions to 'compile' user code, run the
+main program, reset itself, and create directories/files inside the environment.
-The actual implementation can be found inside `executionEnvironment.js`. Upon creation, it creates
-an iFrame (which can be thought of as a page inside the page) - and this is where all the user's
-code will be run.
+The actual implementation can be found inside `executionEnvironment.js`. Upon
+creation, it creates an iFrame (which can be thought of as a page inside the
+page) - and this is where all the user's code will be run.
## Why create an iFrame?
-The iFrame it creates is sandboxed so that it cannot access anything inside the main page. This is
-important, since while we can likely trust code the user writes themselves, we cannot trust code
-they may receive from other people. If we ran the code the user writes directly inside the main
-page, it could access and manipulate the IDE itself, along with accessing cookies and other things
-it shouldn't have access to. By running it inside the iFrame, we can be sure it can't access
-anything it shouldn't.
+The iFrame it creates is sandboxed so that it cannot access anything inside the
+main page. This is important, since while we can likely trust code the user
+writes themselves, we cannot trust code they may receive from other people. If
+we ran the code the user writes directly inside the main page, it could access
+and manipulate the IDE itself, along with accessing cookies and other things it
+shouldn't have access to. By running it inside the iFrame, we can be sure it
+can't access anything it shouldn't.
-It also makes it clear which files are part of the project (since those exist outside the iFrame),
-and which parts are only transient, such as logs (that only exist inside the iFrame and are
-destroyed on reloads). It means user code can not permanently overwrite resources.
+It also makes it clear which files are part of the project (since those exist
+outside the iFrame), and which parts are only transient, such as logs (that only
+exist inside the iFrame and are destroyed on reloads). It means user code
+cannot permanently overwrite resources.
-Additionally, it gives us a way to completely reset the environment the code is running in, as we
-can destroy and recreate the iFrame without having to reload the main page itself.
+Additionally, it gives us a way to completely reset the environment the code is
+running in, as we can destroy and recreate the iFrame without having to reload
+the main page itself.
-To communicate with the iFrame, we can only send and receive messages, which also limits the number
-of potential escape routes from the iFrame.
+To communicate with the iFrame, we can only send and receive messages, which
+also limits the number of potential escape routes from the iFrame.
## Members
-- `hasRunOnce` - has the program been run yet? Is reset with `resetEnvironment()`
+- `hasRunOnce` - has the program been run yet? Is reset with
+ `resetEnvironment()`
- `executionStatus` - current status of the program, can be:
- `ExecutionStatus.Unstarted`
- `ExecutionStatus.Running`
@@ -43,42 +49,47 @@ of potential escape routes from the iFrame.
## Methods
-- `constructor(container)` - takes a container element to load the iFrame inside.
+- `constructor(container)` - takes a container element to load the iFrame
+ inside.
### Initializing user's code
-- `runCodeBlock(block, source)` - takes a code block (which has the block name `block`, and the
- source code `source`, syntax checks it, and if it passes, sends the code to the iFrame via a
- message.
-- `runCodeBlocks(blocks)` - takes an array of dictionaries with the keys {name, code}, and calls
- `runCodeBlock` for each one.
+- `runCodeBlock(block, source)` - takes a code block (with the block name
+  `block` and the source code `source`), syntax checks it, and if it passes,
+  sends the code to the iFrame via a message.
+- `runCodeBlocks(blocks)` - takes an array of dictionaries with the keys {name,
+ code}, and calls `runCodeBlock` for each one.
### Running user's code
-- `runProgram()` - sends a message to the iFrame to run the user's `main` (if it exists).
-- `pauseProgram()` - sends a message to pause the user's program - returns a `promise`, that
- resolves once the program pauses, or fails after 2 seconds.
-- `continueProgram()` - sends a message to continue the user's program (if it has been paused)
-- `stopProgram()` - sends a message to stop the user's program completely - returns a `promise`,
- that resolves once the program stops, or fails after 2 seconds.
+- `runProgram()` - sends a message to the iFrame to run the user's `main` (if it
+ exists).
+- `pauseProgram()` - sends a message to pause the user's program - returns a
+ `promise`, that resolves once the program pauses, or fails after 2 seconds.
+- `continueProgram()` - sends a message to continue the user's program (if it
+  has been paused).
+- `stopProgram()` - sends a message to stop the user's program completely -
+ returns a `promise`, that resolves once the program stops, or fails after 2
+ seconds.
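The resolve-or-fail-after-two-seconds behaviour described for `pauseProgram()` and `stopProgram()` can be sketched with a generic helper. This is illustrative only - `withTimeout` is a hypothetical stand-in, not part of the real class, and any promise stands in here for the iFrame's acknowledgement message:

```javascript
// Illustrative sketch of a promise that resolves when acknowledged, or
// rejects after a timeout - the behaviour described for pauseProgram()
// and stopProgram(). `withTimeout` is hypothetical, not real API.
function withTimeout(promise, ms) {
  const timeout = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error("timed out after " + ms + "ms")), ms);
  });
  // Whichever settles first wins: the acknowledgement or the timeout.
  return Promise.race([promise, timeout]);
}
```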
### Handling the environment
-- `resetEnvironment()` - completely resets the environment, by destroying and recreating the iFrame.
- All files inside the environment will also be lost.
-- `cleanEnvironment()` - Does a 'best-efforts' attempt to tidy the environment, such as removing
- user created global variables. Much faster than `resetEnvironment()`, and does not reset the file
- system.
+- `resetEnvironment()` - completely resets the environment, by destroying and
+ recreating the iFrame. All files inside the environment will also be lost.
+- `cleanEnvironment()` - Does a 'best-effort' attempt to tidy the environment,
+  such as removing user-created global variables. Much faster than
+  `resetEnvironment()`, and does not reset the file system.
### Filesystem
- `mkdir(path)` - sends a message to create a directory at `path`
-- `writeFile(path, data)` - sends a message to write `data` to a file `path`, creating it if it does
- not exist
+- `writeFile(path, data)` - sends a message to write `data` to a file `path`,
+ creating it if it does not exist
## Events
-The events can be listened to by attaching with `addEventListener(event, callback)`
+The events can be listened to by attaching with
+`addEventListener(event, callback)`
- `initialized` - the ExecutionEnvironment is setup and ready to execute code.
- `error` - an error has occurred in user code. Members:
@@ -94,7 +105,8 @@ The events can be listened to by attaching with `addEventListener(event, callbac
- `newPath` - the path it was moved to
- `onMakeDirectory` - A directory has been made. Members:
- `path` - the path to the new directory
-- `onDeletePath` - A file or directory has been deleted. Members: - `path` - the path to the
- file/directory
-- `onOpenFile` - A file has been opened, possibly for reading or writing. Members:
+- `onDeletePath` - A file or directory has been deleted. Members:
+  - `path` - the path to the file/directory
+- `onOpenFile` - A file has been opened, possibly for reading or writing.
+ Members:
- `path` - the path to the file
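As a runnable sketch of the listening pattern, a plain `EventTarget` can stand in for an ExecutionEnvironment (the real events are dispatched by the class itself; the event names come from the list above, and the dispatch at the end only simulates one):

```javascript
// Illustrative only: wiring listeners with addEventListener. A plain
// EventTarget stands in for an ExecutionEnvironment so the pattern runs
// anywhere; in the real class these events arrive from the iFrame.
const env = new EventTarget(); // stand-in for an ExecutionEnvironment

const openedFiles = [];
env.addEventListener("initialized", () => {
  console.log("environment ready - user code can now be run");
});
env.addEventListener("onOpenFile", (ev) => {
  openedFiles.push(ev.path); // `path` - the path to the file
});

// Simulate what the environment would dispatch when a file is opened:
const ev = new Event("onOpenFile");
ev.path = "/Resources/images/player.png";
env.dispatchEvent(ev);
```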
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/idb-stored-project.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/idb-stored-project.md
index 062ebf8f4..78f5ff877 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/idb-stored-project.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/idb-stored-project.md
@@ -1,61 +1,70 @@
---
title: IDBStoredProject - Code Documentation
-description: An explanation of what IDBStoredProject is, its methods, members, and events.
+description:
+ An explanation of what IDBStoredProject is, its methods, members, and events.
---
[_IDBStoredProject.js_](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/IDBStoredProject.js)
-IDBStoredProject is a class that handles saving/loading the user's project within the browser
-itself. It uses
-[IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB) storage,
-which allows it to store large amounts of data in a simplified database structure.
+IDBStoredProject is a class that handles saving/loading the user's project
+within the browser itself. It uses
+[IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB)
+storage, which allows it to store large amounts of data in a simplified database
+structure.
-It stores a single project inside a single database, creating a new one for each project. It has
-functions to read and write files to a virtual filesystem saved inside the database (for storing
-user code, and uploaded resources like sprites and sounds). It also has an area for config, and
-keeps track of its lastWriteTime inside there.
+It stores a single project inside a single database, creating a new one for each
+project. It has functions to read and write files to a virtual filesystem saved
+inside the database (for storing user code, and uploaded resources like sprites
+and sounds). It also has an area for config, and keeps track of its
+lastWriteTime inside there.
## Database layout
There are two tables:
-- `project` - contains information about the project, such as the last write time. Simple key-value,
- with the key's name being 'category'.
-- `files` - stores all the user's files and directories. Each entry contains the following:
- - `nodeID` - a numerical identifier for the node (file/directory), automatically increments.
+- `project` - contains information about the project, such as the last write
+ time. Simple key-value, with the key's name being 'category'.
+- `files` - stores all the user's files and directories. Each entry contains the
+ following:
+ - `nodeID` - a numerical identifier for the node (file/directory),
+ automatically increments.
- `name` - name of the file/directory
- `type` - either `"FILE"` or `"DIR"` - file or directory
- - `data` - the file's contents - a binary blob of data. Or `null` if it's a directory.
- - `parent` - the `nodeID` of the parent of the file/directory (what directory is it inside). -1
- means it is inside the root directory.
+  - `data` - the file's contents - a binary blob of data, or `null` if it's a
+    directory.
+ - `parent` - the `nodeID` of the parent of the file/directory (what directory
+ is it inside). -1 means it is inside the root directory.
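To make the parent-pointer layout concrete, here is a small sketch (with made-up rows, not real database contents) of how the flat `files` table can be folded back into a tree:

```javascript
// Illustrative only: folding flat `files` rows (shaped like the table
// described above) back into a tree. The rows below are made up.
function buildTree(rows) {
  const byId = new Map(rows.map((r) => [r.nodeID, { ...r, children: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    if (node.parent === -1) roots.push(node); // -1 = root directory
    else byId.get(node.parent).children.push(node);
  }
  return roots;
}

const rows = [
  { nodeID: 1, name: "code", type: "DIR", data: null, parent: -1 },
  { nodeID: 2, name: "main.cpp", type: "FILE", data: "...", parent: 1 },
];
const tree = buildTree(rows);
```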
## Members
-- `initializer` - a function that can be called to initialize the database - performs the equivalent
- of `skm new`
-- `projectName` - the name of the project (and therefore database) it is currently attached to.
- `null` if detached.
-- `lastKnownWriteTime` - the last time the project was written to within this tab.
+- `initializer` - a function that can be called to initialize the database -
+ performs the equivalent of `skm new`
+- `projectName` - the name of the project (and therefore database) it is
+ currently attached to. `null` if detached.
+- `lastKnownWriteTime` - the last time the project was written to within this
+ tab.
## Methods
-- `constructor(initializer)` - takes an initializer function, used when initializing a project's
- database for the first time.
-- `attachToProject(storeName)` - attaches it to a project with the name `storeName`. Initializes the
- database, and emits an `attached` event.
-- `detachFromProject()` - detaches itself from the project, resets its internal state and emits a
- `detached` event.
-- `deleteProject(storeName)` - deletes the project named `storeName`, and returns a promise which
- resolves once the database is truly deleted.
-- `checkForWriteConflicts()` - checks the `lastKnownWriteTime` against the actual `lastWriteTime`
- inside the database - if they conflict in a way that suggests another tab has written to the
- database, throws a `timeConflict` event.
-- `access(func)` - a bit of a special function. This function is the only entry point to
- reading/writing to the IDBStoredProject. It takes a function, which it will call, passing in a new
- object (internally a `__IDBStoredProjectRW`), which has many more methods for reading/writing.
- This is done, so that the opening/closing of the database can be wrapped around the user function,
- without them having to handle it manually (and potentially leave open connections causing issues
- later on). Here's an example of usage:
+- `constructor(initializer)` - takes an initializer function, used when
+ initializing a project's database for the first time.
+- `attachToProject(storeName)` - attaches it to a project with the name
+ `storeName`. Initializes the database, and emits an `attached` event.
+- `detachFromProject()` - detaches itself from the project, resets its internal
+ state and emits a `detached` event.
+- `deleteProject(storeName)` - deletes the project named `storeName`, and
+ returns a promise which resolves once the database is truly deleted.
+- `checkForWriteConflicts()` - checks the `lastKnownWriteTime` against the
+ actual `lastWriteTime` inside the database - if they conflict in a way that
+ suggests another tab has written to the database, throws a `timeConflict`
+ event.
+- `access(func)` - a bit of a special function. This function is the only entry
+ point to reading/writing to the IDBStoredProject. It takes a function, which
+ it will call, passing in a new object (internally a `__IDBStoredProjectRW`),
+ which has many more methods for reading/writing. This is done, so that the
+ opening/closing of the database can be wrapped around the user function,
+ without them having to handle it manually (and potentially leave open
+ connections causing issues later on). Here's an example of usage:
```javascript
let storedProject = new StoredProject(...)
@@ -67,25 +76,28 @@ let storedTime = await storedProject.access((project)=>project.getLastWriteTime(
let storedTime = await storedProject.access(function(project){ return project.getLastWriteTime()});
```
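The wrapping that `access` performs can be sketched like this. It is a hypothetical stand-in, not the real `__IDBStoredProjectRW` internals - `FakeConnection` replaces the actual IndexedDB connection - but the shape is the same: the caller's function runs between open and close, so connections are never left dangling:

```javascript
// Hypothetical sketch of the open/call/close wrapping `access` performs.
// FakeConnection is a made-up stand-in for the real IndexedDB connection.
class FakeConnection {
  constructor() { this.open = true; }
  getLastWriteTime() { return 12345; }
  close() { this.open = false; }
}

async function access(callback) {
  const connection = new FakeConnection(); // stands in for opening the DB
  try {
    return await callback(connection);     // the caller's read/write work
  } finally {
    connection.close();                    // runs even if the callback throws
  }
}
```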
-**The following functions are ones accessible from inside the callback to `access` only**
+**The following functions are ones accessible from inside the callback to
+`access` only**
- `getLastWriteTime()` - get the last write time.
-- `updateLastWriteTime(time = null)` - set the last write time - defaults to the current time
- (stored in unix time)
-- `mkdir(path)` - make a directory at path, does nothing if it already exists. Emits
- `onMakeDirectory` event.
-- `writeFile(path, data)` - overwrites the data inside the file at `path` with `data` - creates the
- file if it doesn't exist. Emits `onOpenFile` event. Also emits `onWriteToFile` event.
-- `rename(oldPath, newPath)` - moves a file/directory to a new path and/or name. Emits `onMovePath`
- event.
-- `readFile(path)` - reads a file at `path` and returns the data inside. Returns `null` if the file
- doesn't exist.
-- `getFileTree()` - returns a complete tree of the file system, in a structure digestible by the
- `TreeView`.
+- `updateLastWriteTime(time = null)` - set the last write time - defaults to the
+ current time (stored in unix time)
+- `mkdir(path)` - make a directory at path, does nothing if it already exists.
+ Emits `onMakeDirectory` event.
+- `writeFile(path, data)` - overwrites the data inside the file at `path` with
+ `data` - creates the file if it doesn't exist. Emits `onOpenFile` event. Also
+ emits `onWriteToFile` event.
+- `rename(oldPath, newPath)` - moves a file/directory to a new path and/or name.
+ Emits `onMovePath` event.
+- `readFile(path)` - reads a file at `path` and returns the data inside. Returns
+ `null` if the file doesn't exist.
+- `getFileTree()` - returns a complete tree of the file system, in a structure
+ digestible by the `TreeView`.
## Events
-The events can be listened to by attaching with `addEventListener(event, callback)`
+The events can be listened to by attaching with
+`addEventListener(event, callback)`
- `attached` - Is attached and can be used.
- `detached` - Has been detached.
@@ -96,5 +108,6 @@ The events can be listened to by attaching with `addEventListener(event, callbac
- `path` - the path to the new directory
- `onDeletePath` - A file or directory has been deleted. Members:
- `path` - the path to the file/directory
-- `onOpenFile` - A file has been opened, possibly for reading or writing. Members:
+- `onOpenFile` - A file has been opened, possibly for reading or writing.
+ Members:
- `path` - the path to the file
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/tree-view.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/tree-view.md
index 3bfb66918..f3bf62f4f 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/tree-view.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Classes/tree-view.md
@@ -1,43 +1,51 @@
---
title: TreeView - Code Documentation
-description: An explanation of what TreeView is, its methods, members, and events.
+description:
+ An explanation of what TreeView is, its methods, members, and events.
---
[_treeview.js_](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/treeview.js)
-TreeView is a class used for displaying and updating a tree view, designed specifically around
-file/directory manipulation. It allows viewing multiple filesystems at once in an overlapping
-fashion (important since we have the files in the user's project that will be saved/loaded, and also
-the live files inside the ExecutionEnvironment, which may be different). It allows files/folders to
-be dragged around and organized, and folders to have a button on the side for uploading new files.
+TreeView is a class used for displaying and updating a tree view, designed
+specifically around file/directory manipulation. It allows viewing multiple
+filesystems at once in an overlapping fashion (important since we have the files
+in the user's project that will be saved/loaded, and also the live files inside
+the ExecutionEnvironment, which may be different). It allows files/folders to be
+dragged around and organized, and folders to have a button on the side for
+uploading new files.
-The way it is intended to be used, is to make it listen to events from the target filesystems (such
-as file moves/deletes), and update itself accordingly. When it is interacted with by the user, it
-will emit its own events - these events should be listened to, and the target filesystem updated
-accordingly. It should then look like this :
+It is intended to be used by making it listen to events from the target
+filesystems (such as file moves/deletes) and update itself accordingly. When
+the user interacts with it, it emits its own events - these events should be
+listened to, and the target filesystem updated accordingly. It should then look
+like this:
1. A file is created in the target filesystem and an event is emitted
-2. The TreeView reacts to this event and creates a node in its tree with the same name.
-3. The user now drags that node to inside another node (directory), and the TreeView emits an event.
- Note that it does _not_ change itself here. The node inside the tree has not actually moved yet.
-4. A function is called back from this event, that then tells the target filesystem to move the
- file.
+2. The TreeView reacts to this event and creates a node in its tree with the
+ same name.
+3. The user now drags that node to inside another node (directory), and the
+ TreeView emits an event. Note that it does _not_ change itself here. The node
+ inside the tree has not actually moved yet.
+4. A function is called back from this event, that then tells the target
+ filesystem to move the file.
5. The target filesystem moves the file, and an event is emitted.
-6. The TreeView reacts to this event, and moves the node to inside the directory.
+6. The TreeView reacts to this event, and moves the node to inside the
+ directory.
-See how the TreeView never updates itself - it relies on an event coming _back_ from the target
-filesystem. This means that if the target filesystem fails to do the operation for whatever reason,
-the TreeView also remains in the same state, meaning the two remain synchronized effectively.
+Note how the TreeView never updates itself - it relies on an event coming
+_back_ from the target filesystem. This means that if the target filesystem
+fails to perform the operation for whatever reason, the TreeView also remains
+in its current state, so the two stay synchronized.
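The round trip above can be sketched with two tiny mocks. The names (`MockFS`, `MockView`, `userDraggedNode`) are illustrative, not the real TreeView or filesystem APIs:

```javascript
// Minimal mock of the event round trip: the view never mutates itself;
// it waits for the filesystem's confirming event.
class MockFS extends EventTarget {
  constructor() {
    super();
    this.paths = new Set();
  }
  emit(name, members) {
    const e = new Event(name);
    Object.assign(e, members);
    this.dispatchEvent(e);
  }
  create(path) {
    this.paths.add(path);
    this.emit("created", { path }); // step 1
  }
  move(oldPath, newPath) {
    if (!this.paths.delete(oldPath)) return; // a failed move emits nothing
    this.paths.add(newPath);
    this.emit("moved", { oldPath, newPath }); // step 5
  }
}

class MockView extends EventTarget {
  constructor(fs) {
    super();
    this.nodes = new Set();
    // Steps 2 and 6: react only to events coming back from the filesystem.
    fs.addEventListener("created", (e) => this.nodes.add(e.path));
    fs.addEventListener("moved", (e) => {
      this.nodes.delete(e.oldPath);
      this.nodes.add(e.newPath);
    });
  }
  userDraggedNode(oldPath, newPath) {
    // Step 3: emit a request, but do NOT touch this.nodes here.
    const e = new Event("nodeMoveRequest");
    Object.assign(e, { oldPath, newPath });
    this.dispatchEvent(e);
  }
}

const mockFS = new MockFS();
const view = new MockView(mockFS);
// Step 4: a callback forwards the request to the filesystem.
view.addEventListener("nodeMoveRequest", (e) => mockFS.move(e.oldPath, e.newPath));

mockFS.create("/a.txt");
view.userDraggedNode("/a.txt", "/dir/a.txt");
console.log(view.nodes.has("/dir/a.txt")); // true - updated via the FS event
```

If `mockFS.move` fails silently, no `moved` event fires and the view's node simply stays where it was, which is exactly the synchronization property described above.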
See example usage of it inside `fileview.js`
-([here](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/fileview.js)), where it
-is attached to both the `IDBStoredProject` filesystem, and also the filesystem inside the
-`ExecutionEnvironment`.
+([here](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/fileview.js)),
+where it is attached to both the `IDBStoredProject` filesystem, and also the
+filesystem inside the `ExecutionEnvironment`.
### Limitations
-Currently there is no way to delete files/folders, or rename files/folders in the interface itself.
-This shouldn't be hard to add, however.
+Currently there is no way to delete or rename files/folders in the interface
+itself. This shouldn't be hard to add, however.
## Members
@@ -45,35 +53,40 @@ None publicly available.
## Methods
-- `constructor(container, FSes)` - takes a container to place the TreeView's elements into, and a
- list of FSes, which are the filesystems it will support. An example list looks like this
- `{"persistent":"node-persistent", "transient":"node-transient"}`, key-value pairs where the key is
- the filesystems name, and the value is a css style to apply to nodes inside this filesystem.
-- `moveNode(oldPath, newPath, index = -1, FS)` - moves a node to a new path and/or name. Allows one
- to set the index the node will appear at, and also which filesystem(s) (a list) the move occurred
- in.
+- `constructor(container, FSes)` - takes a container to place the TreeView's
+  elements into, and a list of FSes, which are the filesystems it will support.
+  An example list looks like this:
+  `{"persistent":"node-persistent", "transient":"node-transient"}` - key-value
+  pairs where the key is the filesystem's name, and the value is a CSS style to
+  apply to nodes inside this filesystem.
+- `moveNode(oldPath, newPath, index = -1, FS)` - moves a node to a new path
+ and/or name. Allows one to set the index the node will appear at, and also
+ which filesystem(s) (a list) the move occurred in.
- `deleteNode(path, FS)` - deletes a node from a set of filesystem(s) (a list)
-- `addDirectory(path, FS)` - make a directory at path, does nothing if it already exists. Allows one
- to set which filesystem(s) (a list) the directory add occurred in.
-- `addFile(path, data)` - make a file at path, does nothing if it already exists. Allows one to set
- which filesystem(s) (a list) the file was added in.
+- `addDirectory(path, FS)` - make a directory at path, does nothing if it
+ already exists. Allows one to set which filesystem(s) (a list) the directory
+ add occurred in.
+- `addFile(path, data)` - make a file at path, does nothing if it already
+ exists. Allows one to set which filesystem(s) (a list) the file was added in.
- `reset(path)` - Deletes all nodes.
-- `populatefileView(files, FS)` - Populates the tree with a list of files in a particular structure
- (the same one `IDBStoredProject.getFileTree()` returns). Allows one to set which filesystem(s) (a
- list) the directory add occurred in.
+- `populatefileView(files, FS)` - Populates the tree with a list of files in a
+  particular structure (the same one `IDBStoredProject.getFileTree()` returns).
+  Allows one to set which filesystem(s) (a list) the files were added in.
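A small sketch of how the FSes map passed to the constructor might drive node styling (the real TreeView applies these classes internally; `classForNode` is a hypothetical helper for illustration):

```javascript
// Key: filesystem name; value: CSS class applied to nodes in that filesystem.
const FSes = { persistent: "node-persistent", transient: "node-transient" };

function classForNode(fsNames) {
  // A node can exist in several filesystems at once (overlapping views),
  // so it can carry several classes.
  return fsNames.map((name) => FSes[name]).join(" ");
}

console.log(classForNode(["persistent"])); // node-persistent
console.log(classForNode(["persistent", "transient"])); // node-persistent node-transient
```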
## Events
-The events can be listened to by attaching with `addEventListener(event, callback)`
+The events can be listened to by attaching with
+`addEventListener(event, callback)`
- `nodeMoveRequest` - A file or directory has been moved. Members:
- `treeView` - the TreeView object
- `oldPath` - the original path
- `newPath` - the path it was moved to
- `FS` - the filesystem(s) the change occurred in.
- - `accept` - a function that can be called to announce that the change was successful -
- **currently unused**.
-- `folderUploadRequest` - The 'add file' button was clicked on a directory. Members:
+ - `accept` - a function that can be called to announce that the change was
+ successful - **currently unused**.
+- `folderUploadRequest` - The 'add file' button was clicked on a directory.
+ Members:
- `treeView` - the TreeView object
- `path` - path to the directory
- `FS` - the filesystem(s) the directory exists in.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Other/folder-structure-overview.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Other/folder-structure-overview.md
index ad3454e69..388677daf 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Other/folder-structure-overview.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Other/folder-structure-overview.md
@@ -1,28 +1,29 @@
---
title: Overview of SplashKit Online's Folders and Files
description:
- An overview of what all of SplashKit Online's folders and files contain, and how they relate.
+ An overview of what all of SplashKit Online's folders and files contain, and
+ how they relate.
---
## Introduction
-This document is a brief overview of how SplashKit Online's folders are structured, with short
-descriptions on what each file contains. If you're looking for a particular piece of code, maybe
-this will help!
+This document is a brief overview of how SplashKit Online's folders are
+structured, with short descriptions on what each file contains. If you're
+looking for a particular piece of code, maybe this will help!
## Structure
### Browser_IDE
-This folder contains all the files relevant to the in-browser IDE. This includes front-end and
-back-end Javascript, html, css, libraries, etc.
+This folder contains all the files relevant to the in-browser IDE. This includes
+front-end and back-end JavaScript, HTML, CSS, libraries, etc.
#### Folders
`node_modules` - All the installed node libraries.
-`splashkit` - Where the SplashKit WebAssembly library build goes! Compiled from the SplashKitWasm
-folder.
+`splashkit` - Where the SplashKit WebAssembly library build goes! Compiled from
+the SplashKitWasm folder.
#### Files
@@ -32,97 +33,107 @@ The following files are used when running as a node project
`server.js` - serves the main index, and sets up routing for the libraries.
-`package.json` - The list of packages/libraries and versions that the project uses.
+`package.json` - The list of packages/libraries and versions that the project
+uses.
##### Main Editor
The following files are used inside the main page (`index.html`)
-`index.html` - The editor's html itself - contains a simple layout and some placeholder elements for
-the file view, ExecutionEnvironment, and code editors to load into.
+`index.html` - The editor's HTML itself - contains a simple layout and some
+placeholder elements for the file view, ExecutionEnvironment, and code editors
+to load into.
-`editorMain.js` - The main file that handles setting up the IDE. It loads the code editors, the
-project, shows/updates the run/stop buttons, and also performs saving, loading, and file mirroring.
-It also creates the ExecutionEnvironment, and IDBStoredProject on startup.
+`editorMain.js` - The main file that handles setting up the IDE. It loads the
+code editors, the project, shows/updates the run/stop buttons, and also performs
+saving, loading, and file mirroring. It also creates the ExecutionEnvironment,
+and IDBStoredProject on startup.
-`IDBStoredProject.js` - Holds the IDBStoredProject class, which handles saving/loading the user's
-project to/from internal browser storage. See
+`IDBStoredProject.js` - Holds the IDBStoredProject class, which handles
+saving/loading the user's project to/from internal browser storage. See
[IDBStoredProject](/products/splashkit/documentation/splashkit-online/code-documentation/classes/idb-stored-project)
for internal documentation.
-`executionEnvironment.js` - Holds the ExecutionEnvironment class, which handles 'compiling' and
-running the user's code in a safe way. See
+`executionEnvironment.js` - Holds the ExecutionEnvironment class, which handles
+'compiling' and running the user's code in a safe way. See
[ExecutionEnvironment](/products/splashkit/documentation/splashkit-online/code-documentation/classes/execution-environment)
for internal documentation.
-`treeview.js` - Holds the TreeView class, used to display a tree view targeted at showing a
-filesystem. See
+`treeview.js` - Holds the TreeView class, used to display a tree view targeted
+at showing a filesystem. See
[TreeView](/products/splashkit/documentation/splashkit-online/code-documentation/classes/tree-view)
for internal documentation.
-`fileview.js` - Creates an instance of the TreeView class, hooks it into the IDBStoredProject and
-ExecutionEnvironment's filesystems, and places it on the main page.
+`fileview.js` - Creates an instance of the TreeView class, hooks it into the
+IDBStoredProject and ExecutionEnvironment's filesystems, and places it on the
+main page.
`modal.js` - A utility file with a function for creating modals.
-`projectInitializer.js` - Contains demo code (as text) and the function used to initialize the
-default project - does something similar to `skm new`.
+`projectInitializer.js` - Contains demo code (as text) and the function used to
+initialize the default project - does something similar to `skm new`.
-`stylesheet.css` - Contains the styles for the editor, primarily related to the TreeView but also
-the code editors and other areas.
+`stylesheet.css` - Contains the styles for the editor, primarily related to the
+TreeView but also the code editors and other areas.
-`splashkit-javascript-hint.js` - Contains code to handle autocompletion in the code editors,
-including loading `splashkit_autocomplete.json`
+`splashkit-javascript-hint.js` - Contains code to handle autocompletion in the
+code editors, including loading `splashkit_autocomplete.json`
-`splashkit_autocomplete.json` - Contains data on all the SplashKit functions, classes and enums.
+`splashkit_autocomplete.json` - Contains data on all the SplashKit functions,
+classes and enums.
##### Internal Execution Environment
-The following files are used inside the isolated iFrame (inside the Execution Environment)
-(`executionEnvironment.html`)
+The following files are used inside the isolated iFrame (inside the Execution
+Environment) (`executionEnvironment.html`)
-`executionEnvironment.html` - The Execution Environment's main page, contains a simple layout with
-placeholders for where the canvas and terminal should go.
+`executionEnvironment.html` - The Execution Environment's main page, contains a
+simple layout with placeholders for where the canvas and terminal should go.
-`executionEnvironment_Internal.js` - Internal code for the ExecutionEnvironment. Handles receiving
-messages from the main page's ExecutionEnvironment object, 'compiling', and running the user's code.
+`executionEnvironment_Internal.js` - Internal code for the ExecutionEnvironment.
+Handles receiving messages from the main page's ExecutionEnvironment object,
+'compiling', and running the user's code.
-`executionEnvironment_CodeProcessor.js` - Handles processing the user's code, transforming and
-modifying it so that it can be properly paused, restarted, etc.
+`executionEnvironment_CodeProcessor.js` - Handles processing the user's code,
+transforming and modifying it so that it can be properly paused, restarted, etc.
`loadsplashkit.js` - used to load the SplashKit Wasm library.
-`fsevents.js` - creates an eventTarget that can be used to listen to filesystem events inside the
-virtual filesystem (that the SplashKit Wasm library can access).
+`fsevents.js` - creates an eventTarget that can be used to listen to filesystem
+events inside the virtual filesystem (that the SplashKit Wasm library can
+access).
`stylesheet.css` - Same as in [Main Editor](#main-editor).
### SplashKitWasm
-This folder contains the files related to _building_ SplashKit so that it can run inside the
-browser - the output from this build is then copied into Browser_IDE, where the library is used!
+This folder contains the files related to _building_ SplashKit so that it can
+run inside the browser - the output from this build is then copied into
+Browser_IDE, where the library is used!
`cmake` - The cmake project - used to build the SplashKit Wasm library!
-`external` - Contains the `splashkit-core` submodule, which contains all of SplashKit's code.
+`external` - Contains the `splashkit-core` submodule, which contains all of
+SplashKit's code.
-`stubs` - A couple of stubs (files with empty functions) used to help compile SplashKit despite
-certain functionality missing.
+`stubs` - A couple of stubs (files with empty functions) used to help compile
+SplashKit despite certain functionality missing.
-`tools` - Tools used during compilation, particularly in relation to generating C++ to Javascript
-bindings.
+`tools` - Tools used during compilation, particularly in relation to generating
+C++ to Javascript bindings.
`generated` - Files generated during the build process.
-`out` - Contains the built library! This is also copied straight into `Browser_IDE/splashkit` during
-the build.
+`out` - Contains the built library! This is also copied straight into
+`Browser_IDE/splashkit` during the build.
### DemoProjects
-This folder contains a set of demo projects (just zip files) that can be loaded into the IDE for
-testing, demonstration, or learning purposes.
+This folder contains a set of demo projects (just zip files) that can be loaded
+into the IDE for testing, demonstration, or learning purposes.
### .archive
-This folder contains an archive of previous trimester's work, primarily around some sort of login
-system. currently unneeded but perhaps can be repurposed at some point.
+This folder contains an archive of a previous trimester's work, primarily
+around some sort of login system. It is currently unneeded, but perhaps it can
+be repurposed at some point.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/cpp-js-binding-generation-overview.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/cpp-js-binding-generation-overview.md
index cc7ba475d..5791fb82b 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/cpp-js-binding-generation-overview.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/cpp-js-binding-generation-overview.md
@@ -1,25 +1,30 @@
---
title: C++ <-> JavaScript Binding Generation Overview
description:
- A detailed explanation as to the how and why of SplashKit Online's new binding generator.
+ A detailed explanation as to the how and why of SplashKit Online's new binding
+ generator.
---
## Introduction
-The only data type that can currently be passed to and from WebAssembly functions are integers. The
-SplashKit API requires us not only to be able to pass numbers, but also vectors, structs, vectors of
-structs, function pointers, and so on. Because of this, there is a necessity to generate _bindings_,
-which are able to translate and transfer the data we need across the C++/JavaScript boundary.
+The only data types that can currently be passed to and from WebAssembly
+functions are integers. The SplashKit API requires us not only to be able to
+pass numbers, but also vectors, structs, vectors of structs, function pointers,
+and so on. Because of this, we need to generate _bindings_ that can translate
+and transfer the data we need across the C++/JavaScript boundary.
-We originally used the WebIDL Binder tool to accomplish this, but it had several major flaws:
+We originally used the WebIDL Binder tool to accomplish this, but it had several
+major flaws:
- It did not support arrays of strings, nor structs
- It did not support arrays of arrays (e.g `matrix_2d` could not be represented)
-- It would allocate memory for structs on the C++ side using malloc and only provide a pointer on
- the JavaScript side. As JavaScript has no concept of a destructor, this completely ruined value
- semantics and required manual freeing of even basic SplashKit types, such as colour.
-- It would return struct types via a pointer to a singleton for that function. Therefore the
- following code:
+- It would allocate memory for structs on the C++ side using malloc and only
+ provide a pointer on the JavaScript side. As JavaScript has no concept of a
+ destructor, this completely ruined value semantics and required manual freeing
+ of even basic SplashKit types, such as colour.
+- It would return struct types via a pointer to a singleton for that function.
+ Therefore the following code:
```javascript
let colorA = rgba_color(1, 0, 0, 0); // Red; colorA points to the 'rgba_color return-singleton'
let colorB = rgba_color(0, 1, 0, 0); // Green; overwrites the 'rgba_color return-singleton'
@@ -27,35 +32,39 @@ We originally used the WebIDL Binder tool to accomplish this, but it had several
fill_rectangle(colorA /* Should be red */, 300, 300, 200, 200); // this actually ends up green!
```
would draw a green rectangle, rather than a red one.
-- It had no support for function overloads, making the JavaScript API more cumbersome to use and
- different from usual C++ samples.
+- It had no support for function overloads, making the JavaScript API more
+ cumbersome to use and different from usual C++ samples.
Embind was also looked at, but also suffered from fundamental issues (see
-[here](https://github.com/emscripten-core/emscripten/issues/6492) for one major issue). Thus it was
-decided that a new solution was required.
+[here](https://github.com/emscripten-core/emscripten/issues/6492) for one major
+issue). Thus it was decided that a new solution was required.
## New Solution
-The new solution was written from scratch in Python. The fundamental way it works is as follows:
+The new solution was written from scratch in Python. The fundamental way it
+works is as follows:
- Structs are represented as proper JavaScript objects.
-- When a function is called that needs to pass a struct to the C++ side, space is allocated on the
- WebAssembly stack, and the data for that object is copied from the JavaScript object. Then, a
- pointer to that location on the stack is passed into the C++ function, which operates as normal.
-- Similarly, if the C++ returns a struct, space is preallocated on the stack, and the C++ function
- writes its return result into that space.
-- Vectors are instead allocated on the heap and data copied into that space. The function is then
- passed/passes back two parameters - a pointer to that location on the heap, and a count. On the
- C++ side, a vector is constructed and copies the items from/to that block of memory.
-
-Despite the extra copying compared to the previous solution, because of several other optimizations,
-it has been demonstrated to be 2x more performant than WebIDL Binder, while maintaining proper value
-semantics and vastly enhanced support of SplashKit's API - in fact, there is not a single function
-that cannot be called now.
-
-We can see an example of how this works with a simple function. Let's look at how
-`string matrix_to_string(const matrix_2d &matrix)` is handled: The following is the C++ wrapper,
-with some additional comments added:
+- When a function is called that needs to pass a struct to the C++ side, space
+ is allocated on the WebAssembly stack, and the data for that object is copied
+ from the JavaScript object. Then, a pointer to that location on the stack is
+ passed into the C++ function, which operates as normal.
+- Similarly, if the C++ returns a struct, space is preallocated on the stack,
+ and the C++ function writes its return result into that space.
+- Vectors are instead allocated on the heap and data copied into that space. The
+ function is then passed/passes back two parameters - a pointer to that
+ location on the heap, and a count. On the C++ side, a vector is constructed
+ and copies the items from/to that block of memory.
+
+Despite the extra copying compared to the previous solution, thanks to several
+other optimizations it has been demonstrated to be 2x more performant than the
+WebIDL Binder, while maintaining proper value semantics and vastly improving
+support for SplashKit's API - in fact, there is not a single function that
+cannot be called now.
+
+We can see an example of how this works with a simple function. Let's look at
+how `string matrix_to_string(const matrix_2d &matrix)` is handled. The
+following is the C++ wrapper, with some additional comments added:
```c++
// CPP_matrix_to_string is the wrapper function. As can be seen, it directly takes a reference to a matrix, and returns a char* pointer rather than a std::string
@@ -71,7 +80,8 @@ with some additional comments added:
}
```
-The JavaScript side has much more work to do - again with added comments as explanation:
+The JavaScript side has much more work to do - again with added comments as
+explanation:
```javascript
// This is the JavaScript function
@@ -79,12 +89,14 @@ function matrix_to_string(matrix) {
// First we verify the type of the object passed in, and throw a useful error message if it's incorrect.
if (typeof matrix !== "object" || matrix.constructor != matrix_2d)
throw new SplashKitArgumentError(
- "Incorrect call to matrix_to_string: matrix needs to be a matrix_2d, not a " + typeof matrix,
+ "Incorrect call to matrix_to_string: matrix needs to be a matrix_2d, not a " +
+ typeof matrix,
);
// We also check the argument count
if (arguments.length != 1)
throw new SplashKitArgumentError(
- "Incorrect call to matrix_to_string: expects 1 parameters, not " + String(arguments.length),
+ "Incorrect call to matrix_to_string: expects 1 parameters, not " +
+ String(arguments.length),
);
// Now we allocate space on the stack. First we save the current stack address
@@ -126,8 +138,8 @@ function matrix_to_string(matrix) {
}
```
-Finally, we can have a look at the JavaScript class for matrix_2d, to get an idea of what 'write'
-does:
+Finally, we can have a look at the JavaScript class for matrix_2d, to get an
+idea of what 'write' does:
```javascript
class matrix_2d {
@@ -149,11 +161,13 @@ class matrix_2d {
static checkCPPMapping() {
assert(
0 == wasmExports.matrix_2d_elements_offset(),
- "Wrong offset! matrix_2d.elements| 0 != " + String(wasmExports.matrix_2d_elements_offset()),
+ "Wrong offset! matrix_2d.elements| 0 != " +
+ String(wasmExports.matrix_2d_elements_offset()),
);
assert(
72 == wasmExports.matrix_2d_size(),
- "Wrong class size! matrix_2d| 72 != " + String(wasmExports.matrix_2d_size()),
+ "Wrong class size! matrix_2d| 72 != " +
+ String(wasmExports.matrix_2d_size()),
);
}
@@ -181,22 +195,22 @@ class matrix_2d {
}
```
-The job of the new binding generator is to generate functions and classes like this for every
-function and every structure in the SplashKit API. It also has to handle generating functions for
-`#defines` such as `COLOR_WHITE`, and creating function pointers on the JavaScript side and passing
-them to the C++ side.
+The job of the new binding generator is to generate functions and classes like
+this for every function and every structure in the SplashKit API. It also has to
+handle generating functions for `#defines` such as `COLOR_WHITE`, and creating
+function pointers on the JavaScript side and passing them to the C++ side.
## The Binding Generation Code - Brief Overview
-Let's now have a brief look at how the binding generation code works. There is quite a lot to it,
-but here's an overview.
+Let's now have a brief look at how the binding generation code works. There is
+quite a lot to it, but here's an overview.
### Setup
-We start with `__main__` inside `generate_javascript_bindings_and_glue.py`, which takes as arguments
-the input SplashKit API json file, the output names for the C++/JS files respectively, and finally a
-true/false parameter that specifies whether to emulate function overloading in the output
-JavaScript.
+We start with `__main__` inside `generate_javascript_bindings_and_glue.py`,
+which takes as arguments the input SplashKit API JSON file, the output names
+for the C++/JS files respectively, and finally a true/false parameter that
+specifies whether to emulate function overloading in the output JavaScript.
First it reads in the API:
@@ -205,21 +219,23 @@ First it reads in the API:
api = read_json_api(api)
```
-Next it creates a 'TypeEnvironment', which includes information on all basic C++ types (such as int,
-float, etc), and also SplashKit structures, including size and offsets of members. The class can
-also be used to resolve typedef'd types, determine if a type is a primitive, and so on. The code for
-this class can be found inside `type_environment.py`, and here's how its used.
+Next it creates a 'TypeEnvironment', which includes information on all basic
+C++ types (such as int, float, etc.), and also SplashKit structures, including
+the size and offsets of their members. The class can also be used to resolve
+typedef'd types, determine if a type is a primitive, and so on. The code for
+this class can be found inside `type_environment.py`, and here's how it's used:
```python
# Compute memory information about all the types
types_env = compute_type_memory_information(api)
```
-From here, if function overloading emulation is enabled, we modify the set of functions we will be
-generating code for. We detect all the functions that need to be considered overloads of each other,
-and then make the parameter names identical between them, based on the longest function. This makes
-the emulation later on possible, where we then detect the number of arguments and their types to
-dispatch the correct C++ function. Here's an example of what it does.
+From here, if function overloading emulation is enabled, we modify the set of
+functions we will be generating code for. We detect all the functions that need
+to be considered overloads of each other, and then make the parameter names
+identical between them, based on the longest function. This makes the later
+emulation possible, where we detect the number of arguments and their types to
+dispatch the correct C++ function. Here's an example of what it does:
```
The following overloads:
@@ -235,8 +251,8 @@ Draw Circle (clr: color, x: double, y: double, radius: double, )
Draw Circle (clr: color, x: double, y: double, radius: double, opts: drawing_options, )
```
-None of the types have been changed - purely the names of the parameters. This is the code that
-handles it:
+None of the types have been changed - purely the names of the parameters. This
+is the code that handles it:
```python
if enable_overloading:
@@ -244,7 +260,8 @@ if enable_overloading:
make_function_overloads_consistent(functions)
```
-and the function `make_function_overloads_consistent` is inside `glue_generation.js`
+The function `make_function_overloads_consistent` is found inside
+`glue_generation.js`.
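Once parameter names are consistent, the emulated overload can dispatch on how it was called. This is an illustrative sketch only, not the generated code: the `__draw_circle_*` names are hypothetical stand-ins for the unique C++-bound functions, and the real generator also inspects argument types, not just the count:

```javascript
// Hypothetical stand-ins for the two unique bound functions the generator
// would emit for the Draw Circle overloads above.
function __draw_circle_4(clr, x, y, radius) {
  return `plain ${radius}`;
}
function __draw_circle_5(clr, x, y, radius, opts) {
  return `opts ${radius}`;
}

// The emulated overload picks the concrete binding from arguments.length.
function draw_circle(...args) {
  switch (args.length) {
    case 4:
      return __draw_circle_4(...args);
    case 5:
      return __draw_circle_5(...args);
    default:
      throw new Error(`draw_circle: no overload takes ${args.length} arguments`);
  }
}

console.log(draw_circle("red", 100, 100, 50)); // plain 50
console.log(draw_circle("red", 100, 100, 50, {})); // opts 50
```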
### Marshal Strategy Generation
@@ -255,10 +272,11 @@ The next line is pretty important
marshalled_functions = compute_marshalled_functions(types_env, functions)
```
-This `compute_marshalled_functions` is going to take the set of functions, and for each function,
-decide on and store the specific strategies it will use to pass each of its parameters and return
-value. Not much code is generated in this step, but it is by far the most important. Let's quickly
-look at the classes that store the data it computes:
+This `compute_marshalled_functions` is going to take the set of functions, and
+for each function, decide on and store the specific strategies it will use to
+pass each of its parameters and return value. Not much code is generated in this
+step, but it is by far the most important. Let's quickly look at the classes
+that store the data it computes:
```python
class MarshaledFunction:
@@ -271,8 +289,9 @@ class MarshaledFunction:
self.inouts = inouts
```
-The `MarshalledFunction` has a `name` (its generic name), a `unique_name` (a name used when function
-overloading is unavailable), and then `inouts` - which contain both parameters and return values.
+The `MarshaledFunction` has a `name` (its generic name), a `unique_name` (a
+name used when function overloading is unavailable), and then `inouts` - which
+contain both parameters and return values.
The `inouts` are all of type `MarshaledParam`, which looks as follows:
@@ -318,21 +337,22 @@ class MarshaledParam:
self.js_return = ""
```
-I won't explain every field - but hopefully it can be seen that it's storing information and small
-snippets of code that will be used when converting and transferring data across the C++<->JavaScript
-boundary. These types are all found inside `marshalling.py`, while the overall strategies/functions
-are found in `marshalling_strategies.py`
+I won't explain every field - but hopefully it can be seen that it's storing
+information and small snippets of code that will be used when converting and
+transferring data across the C++<->JavaScript boundary. These types are all
+found inside `marshalling.py`, while the overall strategies/functions are found
+in `marshalling_strategies.py`.
Back to `compute_marshalled_functions`, all it does internally is call
-`calculate_marshalled_function` for each function, which then calls `marshal_parameter` for its
-parameters and return value.
+`calculate_marshalled_function` for each function, which then calls
+`marshal_parameter` for its parameters and return value.
-`marshal_parameter` then looks at the parameter passed in, and decides based on the parameter's
-type, whether it's a reference type, const, etc, how its going to be passed. These specific
-decisions are detailed in comments in the code.
+`marshal_parameter` then looks at the parameter passed in and decides, based on
+the parameter's type (whether it's a reference type, const, etc.), how it's
+going to be passed. These specific decisions are detailed in comments in the
+code.
-Finally, at the bottom of the function it then dispatches which specific method it will marshal the
-data with - we can see what that looks like here:
+Finally, at the bottom of the function it then dispatches which specific method
+it will marshal the data with - we can see what that looks like here:
```python
# Arrays require special handling
@@ -361,17 +381,18 @@ data with - we can see what that looks like here:
```
-The code for each of those functions is quite long and detailed, but almost every line has been
-commented to explain the decisions it is making, so feel free to have a look inside
-`marshalling_strategies.py`
+The code for each of those functions is quite long and detailed, but almost
+every line has been commented to explain the decisions it is making, so feel
+free to have a look inside `marshalling_strategies.py`.
-After computing our `MarshalledFunction`s, we finally get to code generation output. C++ code
-generation utilities are found inside `cpp_code_gen.py`, while JavaScript code generation is found
-inside `js_code_gen.py`. They contain functions for generating struct definitions, function bodies,
-declarations, and so on, all based around `MarshalledFunction`.
+After computing our `MarshalledFunction`s, we finally get to code generation
+output. C++ code generation utilities are found inside `cpp_code_gen.py`, while
+JavaScript code generation is found inside `js_code_gen.py`. They contain
+functions for generating struct definitions, function bodies, declarations, and
+so on, all based around `MarshalledFunction`.
-`glue_generation.py` then uses these functions to assemble the final code, which is called via the
-following two lines in `__main__`
+`glue_generation.py` then uses these functions to assemble the final code, which
+is called via the following two lines in `__main__`
```python
# Generate all the final glue code
@@ -379,46 +400,52 @@ generate_cpp_glue(api, marshalled_functions, output_cpp)
generate_js_glue(types_env, api, marshalled_functions, output_js, enable_overloading)
```
-There are a few other files I haven't mentioned, so here's a complete listing of every file and its
-contents:
-
-- `generate_javascript_bindings_and_glue.py` - holds the terminal script that takes as input the
- SplashKit API along with the output file paths, and performs the binding.
-- `json_api_reader.py` - provides functions/classes to read in the SplashKit API and make it more
- convenient to process later on
-- `js_binding_gen/streaming_code_indenter.py` - a small class that is used to automatically indent
- the generated code.
-- `js_binding_gen/type_environment.py` - contains the `TypeEnvironment` class, used to query
- sizes/layouts of built-in and SplashKit specific types.
-- `js_binding_gen/marshalling.py` - contains the `MarshalledFunction` and `MarshalledParam` types,
- which contain all the information needed for final code generation.
-- `js_binding_gen/marshalling_strategies.py` - holds all the functions and logic needed to create
- the `MarshalledFunction`s.
-- `js_binding_gen/js_code_gen.py` - contains code generation functions for JavaScript; e.g function
- calls, class definitions, etc.
-- `js_binding_gen/cpp_code_gen.py` - contains code generation functions for C++; e.g function calls,
- declarations, etc
-- `js_binding_gen/glue_generation.py` - contains methods that use the code generation utilities to
- actually generate the final output code. Also handles function overload emulation.
-- `js_binding_gen/javascript_bindings_preamble.py` - contains lengthy code included in the generated
- JS/C++ that defines various utility functions used.
+There are a few other files I haven't mentioned, so here's a complete listing of
+every file and its contents:
+
+- `generate_javascript_bindings_and_glue.py` - holds the terminal script that
+ takes as input the SplashKit API along with the output file paths, and
+ performs the binding.
+- `json_api_reader.py` - provides functions/classes to read in the SplashKit API
+ and make it more convenient to process later on
+- `js_binding_gen/streaming_code_indenter.py` - a small class that is used to
+ automatically indent the generated code.
+- `js_binding_gen/type_environment.py` - contains the `TypeEnvironment` class,
+ used to query sizes/layouts of built-in and SplashKit specific types.
+- `js_binding_gen/marshalling.py` - contains the `MarshalledFunction` and
+ `MarshalledParam` types, which contain all the information needed for final
+ code generation.
+- `js_binding_gen/marshalling_strategies.py` - holds all the functions and logic
+ needed to create the `MarshalledFunction`s.
+- `js_binding_gen/js_code_gen.py` - contains code generation functions for
+  JavaScript; e.g. function calls, class definitions, etc.
+- `js_binding_gen/cpp_code_gen.py` - contains code generation functions for C++;
+  e.g. function calls, declarations, etc.
+- `js_binding_gen/glue_generation.py` - contains methods that use the code
+ generation utilities to actually generate the final output code. Also handles
+ function overload emulation.
+- `js_binding_gen/javascript_bindings_preamble.py` - contains lengthy code
+ included in the generated JS/C++ that defines various utility functions used.
## Known Limitations
-There is only one known limitation currently. Functions that take a reference to a fundamental type,
-and return data to that reference, cannot be expressed in JavaScript. The only function this is
-known to affect is
+There is only one known limitation currently. Functions that take a reference to
+a fundamental type, and return data to that reference, cannot be expressed in
+JavaScript. The only function this is known to affect is
`point_2d closest_point_on_lines(const point_2d from_pt, const vector &lines, int &line_idx)`,
-as the number passed into `line_idx` will not change. There is no way to fix this in JavaScript at
-the present time - the best we could do is make the user wrap their 'int' into a temporary object,
-and then retrieve the updated value from that object after calling the function.
+as the number passed into `line_idx` will not change. There is no way to fix
+this in JavaScript at the present time - the best we could do is make the user
+wrap their 'int' into a temporary object, and then retrieve the updated value
+from that object after calling the function.
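
The wrapper-object workaround described above could look something like this sketch. It is hypothetical (not implemented in SplashKit Online), and the body is a stand-in; the point is only that a function can "return" through an object the caller passes in, where it cannot through a plain number.

```javascript
// Hypothetical sketch of the wrapper-object workaround (not implemented).
// JavaScript cannot write back through an `int &` parameter, but it can
// write into an object the caller passes in.
function closest_point_on_lines(fromPt, lines, lineIdxRef) {
  // Stand-in body: a real binding would search `lines` here and write the
  // index of the closest line into the wrapper object.
  lineIdxRef.value = 0;
  return { x: fromPt.x, y: fromPt.y };
}

const lineIdx = { value: -1 };
closest_point_on_lines({ x: 1, y: 2 }, [], lineIdx);
console.log(lineIdx.value); // updated through the wrapper
```
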
## Wrap-up
-The new bindings generator has exposed vastly more SplashKit API functionality for use in
-JavaScript. It fixes strange behaviors that the old bindings exhibited, that would have resulted in
-confusion, especially for beginner programmers. Finally, It simplifies the usage of SplashKit Online
-by making the JavaScript code look and behave almost identical to the equivalent C++ code, making it
-easier to follow existing guides and API documentation. While it introduces more technical
-complexity, it is a far more complete solution than the previous one, and should continue to prove
-useful as SplashKit Online develops.
+The new bindings generator has exposed vastly more SplashKit API functionality
+for use in JavaScript. It fixes strange behaviors in the old bindings that
+would have caused confusion, especially for beginner programmers. Finally, it
+simplifies the usage of SplashKit Online by making the JavaScript code look and
+behave almost identically to the equivalent C++ code, making it easier to
+follow existing guides and API documentation. While it introduces more
+technical complexity, it is a far more complete solution than the previous one,
+and should continue to prove useful as SplashKit Online develops.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/how-splashkit-online-runs-code.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/how-splashkit-online-runs-code.md
index 6f564babb..2941a6cd5 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/how-splashkit-online-runs-code.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Code Documentation/Processes/how-splashkit-online-runs-code.md
@@ -1,37 +1,42 @@
---
title: How SplashKit Online runs the user's code!
description:
- A detailed explanation as to all the steps SplashKit Online takes to execute the user's code.
+ A detailed explanation as to all the steps SplashKit Online takes to execute
+ the user's code.
---
## Introduction
-This document is a deep dive into how SplashKit Online runs the user's code. This is a multi-step
-process that will take us through much of SplashKit Online's code, so get ready!
+This document is a deep dive into how SplashKit Online runs the user's code.
+This is a multi-step process that will take us through much of SplashKit
+Online's code, so get ready!
## Overview
-Here's a _very_ brief overview of how it works. Don't worry if you don't understand what this means
-yet! Each part will be explained in due time - but feel free to use this as a reference of the
-overall process.
+Here's a _very_ brief overview of how it works. Don't worry if you don't
+understand what this means yet! Each part will be explained in due time - but
+feel free to use this as a reference of the overall process.
1. Before the user does anything...
1. The IDE starts up, and creates an ExecutionEnvironment.
2. The ExecutionEnvironment creates an iFrame, and loads SplashKit inside it.
-2. User writes code into the code editor (currently there are two 'code blocks', General and Main).
-3. User presses the Run button. First we have to run the code blocks, to create all the user's
- functions/classes and initialize global variables.
-4. Pressing run calls `ExecutionEnvironment.runCodeBlocks`, passing in the General Code and Main
- Code code blocks. For each code block: 1. The code block's text is sent as an argument to
- `ExecutionEnvironment.runCodeBlock(block, source)` 2. The source code gets syntax checked. 3. If
- it is syntactically correct, it is then sent as a `message` into the ExecutionEnvironment's
- iFrame.
+2. User writes code into the code editor (currently there are two 'code blocks',
+ General and Main).
+3. User presses the Run button. First we have to run the code blocks, to create
+ all the user's functions/classes and initialize global variables.
+4. Pressing run calls `ExecutionEnvironment.runCodeBlocks`, passing in the
+   General Code and Main Code code blocks. For each code block:
+   1. The code block's text is sent as an argument to
+      `ExecutionEnvironment.runCodeBlock(block, source)`
+   2. The source code gets syntax checked.
+   3. If it is syntactically correct, it is then sent as a `message` into the
+      ExecutionEnvironment's iFrame.
5. The following steps all happen inside the iFrame (for security purposes)
1. The iFrame receives the message.
2. The code is transformed to make it runnable within the environment
3. A real function is created from the transformed code.
4. **The code is run!**
-6. Now it needs to run the user's main: `ExecutionEnvironment.runProgram()` is called.
+6. Now it needs to run the user's main: `ExecutionEnvironment.runProgram()` is
+ called.
7. This sends a message into the iFrame.
8. The following steps all happen inside the iFrame (for security purposes)
   1. The iFrame checks if the user has created a `main()`
@@ -39,10 +44,11 @@ overall process.
:::note
-If you're wondering why the user's 'code blocks' get run, and only _then_ the user's main program
-gets run, here's why. JavaScript is a completely dynamic language, so unlike compiled languages like
-C++, functions and classes and so on aren't known ahead of time. Instead, the creation of a
-function/class itself is runtime code. The code
+If you're wondering why the user's 'code blocks' get run, and only _then_ the
+user's main program gets run, here's why. JavaScript is a completely dynamic
+language, so unlike compiled languages like C++, functions and classes and so on
+aren't known ahead of time. Instead, the creation of a function/class itself is
+runtime code. The code
```javascript
function myFunction() {
@@ -51,9 +57,11 @@ function myFunction() {
myFunction();
```
-is _run_, to create a function called `myFunction`, that can now be called later on.
+is _run_, to create a function called `myFunction`, that can now be called later
+on.
-In a similar way, functions themselves are just objects, and can be assigned as follows:
+In a similar way, functions themselves are just objects, and can be assigned as
+follows:
```javascript
let myFunction = function () {
@@ -62,11 +70,11 @@ let myFunction = function () {
myFunction();
```
-When we first run the user's code blocks, we are creating all their functions and classes and global
-variables.
+When we first run the user's code blocks, we are creating all their functions
+and classes and global variables.
-Only after this is done, can we then call `main()`, and start the program itself. But as you know
-now, in a way it was running the whole time.
+Only after this is done can we call `main()` and start the program itself. But
+as you now know, in a way it was running the whole time.
:::
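
The idea in the note above can be sketched in a few lines. This is a minimal sketch with assumed names (not the real SplashKit Online code): each "code block" is evaluated as a function, running it is what creates the user's declarations, and only then can `main()` be called.

```javascript
// Minimal sketch: each "code block" becomes a function, and running it
// creates the user's functions in the shared global scope.
const generalBlock = new Function(
  'globalThis.greet = function () { return "Hello from General Code!"; };',
);
const mainBlock = new Function(
  "globalThis.main = function () { return greet(); };",
);

// Running the blocks is what brings greet() and main() into existence...
generalBlock();
mainBlock();

// ...and only now can main() actually be called.
console.log(globalThis.main()); // Hello from General Code!
```
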
@@ -76,7 +84,9 @@ Looking inside `editorMain.js`
```javascript
// ------ Setup Project and Execution Environment ------
-let executionEnviroment = new ExecutionEnvironment(document.getElementById("ExecutionEnvironment"));
+let executionEnviroment = new ExecutionEnvironment(
+ document.getElementById("ExecutionEnvironment"),
+);
```
_from
@@ -87,22 +97,26 @@ First, an `ExecutionEnvironment` is created.
From the
[Source Code Documentation](/products/splashkit/documentation/splashkit-online/code-documentation/classes/execution-environment)
-> ExecutionEnvironment is a class designed to abstract out running the user's code, and also handle
-> the environment itself (such as resetting variables, preloading files, etc). It contains functions
-> to 'compile' user code, run the main program, reset itself, and create directories/files inside
-> the environment.
+> ExecutionEnvironment is a class designed to abstract out running the user's
+> code, and also handle the environment itself (such as resetting variables,
+> preloading files, etc). It contains functions to 'compile' user code, run the
+> main program, reset itself, and create directories/files inside the
+> environment.
-When created, an important thing it does is create an iFrame (sort of a page inside the page), which
-is where all code execution will take place. This is done for security, see
+When created, an important thing it does is create an iFrame (sort of a page
+inside the page), which is where all code execution will take place. This is
+done for security, see
[here](/products/splashkit/documentation/splashkit-online/code-documentation/classes/execution-environment/#why-create-an-iframe)
for a more detailed explanation.
-Inside the iFrame, the page `executionEnvironment.html` is loaded, which loads in things like the
-SplashKit library itself, and also the executionEnvironment internal scripts, like
-`executionEnvironment_Internal.js` and `executionEnvironment_CodeProcessor.js`
+Inside the iFrame, the page `executionEnvironment.html` is loaded, which loads
+in things like the SplashKit library itself, and also the executionEnvironment
+internal scripts, like `executionEnvironment_Internal.js` and
+`executionEnvironment_CodeProcessor.js`
-Once the environment finishes loading, it sends out an `initialized` event - this is when all the
-green buttons in the interface become usable, and code can be executed!
+Once the environment finishes loading, it sends out an `initialized` event -
+this is when all the green buttons in the interface become usable, and code can
+be executed!
## User writes their code, then presses run
@@ -126,16 +140,16 @@ _from
[editorMain.js - runProgram()](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/editorMain.js#L194C6-L194C6)_
1. First it clears the error lines from the code editors.
-2. Next, it calls `executionEnviroment.runCodeBlocks`, and gives it the two code blocks and the
- source code inside the code editors; this runs the user's code, which really means runs all the
- function/variable/class initialization.
-3. Finally it runs the program - this runs the user's `main` function. Let's look at step 2 more
- closely.
+2. Next, it calls `executionEnviroment.runCodeBlocks`, and gives it the two code
+ blocks and the source code inside the code editors; this runs the user's
+ code, which really means runs all the function/variable/class initialization.
+3. Finally it runs the program - this runs the user's `main` function. Let's
+ look at step 2 more closely.
## Pressing run calls `ExecutionEnvironment.runCodeBlocks`, passing in the General Code and Main Code code blocks
-We can see by looking at the source code, that `runCodeBlocks` just calls `runCodeBlock` for each
-block passed in.
+Looking at the source code, we can see that `runCodeBlocks` just calls
+`runCodeBlock` for each block passed in.
```javascript
runCodeBlocks(blocks){
@@ -166,14 +180,16 @@ runCodeBlock(block, source){
_from
[executionEnvironment.js](https://github.com/thoth-tech/SplashkitOnline/blob/main/Browser_IDE/executionEnvironment.js)_
-First thing it does is call the internal function `_syntaxCheckCode(block, source)`, which as the
-name says, will syntax check the code. The way this syntax checking works is somewhat complicated,
-but let's step through it.
+The first thing it does is call the internal function
+`_syntaxCheckCode(block, source)`, which, as the name suggests, syntax checks
+the code. The way this syntax checking works is somewhat complicated, but let's
+step through it.
### Some backstory (optional reading)
-Just as a precursor, in JavaScript there are multiple ways to execute code that the user provides as
-text. One way is to use the function `eval`, for example you can run
+Just as a precursor, in JavaScript there are multiple ways to execute code that
+the user provides as text. One way is to use the function `eval`, for example
+you can run
```javascript
eval("alert('Hello!');");
@@ -185,12 +201,13 @@ and this will pop up a box, as if you had directly run
alert("Hello!");
```
-This method combines syntax checking and running together - first the browser syntax checks the
-code, and then it runs it. However, we want to syntax check the code _before_ running it. The main
-way to do this, is to create a _`Function` object_ from the source code. The browser will syntax
-check the code when making it, without running it yet. As will be explained later, it turns out we
-actually _need_ to make a `Function` object anyway, for certain important features like pausing the
-code and allowing while loops.
+This method combines syntax checking and running together - first the browser
+syntax checks the code, and then it runs it. However, we want to syntax check
+the code _before_ running it. The main way to do this is to create a
+_`Function` object_ from the source code. The browser will syntax check the code
+when making it, without running it yet. As will be explained later, it turns out
+we actually _need_ to make a `Function` object anyway, for certain important
+features like pausing the code and allowing while loops.
This can be as simple as
@@ -198,8 +215,9 @@ This can be as simple as
let myFunction = new Function("alert('Hello!');");
```
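
As a quick sketch of the behaviour being relied on here: constructing the `Function` parses (and therefore syntax checks) the body immediately, while the body itself only runs when the function is later called. Invalid syntax throws a `SyntaxError` at construction time.

```javascript
// Constructing a Function parses the body now, without running it.
const greet = new Function('return "only produced when called";');

// Invalid syntax throws a SyntaxError at construction time.
let syntaxError = null;
try {
  new Function("return 1 +;");
} catch (e) {
  syntaxError = e;
}

console.log(syntaxError instanceof SyntaxError); // true
console.log(greet()); // the body runs only here
```
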
-However, we also need to be notified of any errors that occur, so we can tell the user about them.
-If you are familiar with JavaScript, you might suggest a `try/catch` block, like this:
+However, we also need to be notified of any errors that occur, so we can tell
+the user about them. If you are familiar with JavaScript, you might suggest a
+`try/catch` block, like this:
```javascript
try {
@@ -209,21 +227,23 @@ try {
}
```
-It 'tries' to create the new function, and if it fails, we catch the error. It turns out we can get
-the error message and line number from that `error`, so this seems like it will work. The problem
-with this, is that the actual 'error' that occurred, technically happened on the line where
-`new Function(...)` was called, and not the line inside the user's code, meaning the line number we
-get back is useless. So instead the method described next is what was used.
+It 'tries' to create the new function, and if it fails, we catch the error. It
+turns out we can get the error message and line number from that `error`, so
+this seems like it will work. The problem is that the actual 'error'
+technically happened on the line where `new Function(...)` was called, not the
+line inside the user's code, meaning the line number we get back is useless. So
+the method described next was used instead.
### Syntax Checking
-The method used for syntax checking is to create a `Function` object from the user's source code,
-which lets us do the syntax check without running the code. For reasons that will be explained
-later, we actually create an `AsyncFunction`, which will let us run the code in a more flexible way
-later on.
+The method used for syntax checking is to create a `Function` object from the
+user's source code, which lets us do the syntax check without running the code.
+For reasons that will be explained later, we actually create an `AsyncFunction`,
+which will let us run the code in a more flexible way later on.
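
`AsyncFunction` is not exposed as a global, but it can be reached through the constructor of any async function. A small sketch of how it behaves:

```javascript
// AsyncFunction is not a global; grab it from an async function's constructor.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;

// Constructing it syntax checks the body without running it, and the body
// is allowed to use await.
const checked = new AsyncFunction("await Promise.resolve(); return 42;");

// Calling it returns a promise.
checked().then((result) => console.log(result)); // 42
```
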
-To retrieve any syntax errors that might occur when checking, we listen to the main window's `error`
-event, which reports any errors that happen, and where they happened.
+To retrieve any syntax errors that might occur when checking, we listen to the
+main window's `error` event, which reports any errors that happen, and where
+they happened.
So the code to perform the syntax check looks a bit like this:
@@ -236,13 +256,15 @@ Create the function - if the syntax check fails, the "error" event will get call
Detach from the "error event"
```
-One important aspect of implementing this, is that inside the sandboxed iFrame, the information we
-get in the `error` event is very unhelpful - the line number is always 0, and the error message is
-very generic. Luckily, since we are just syntax checking (and not _running_) the code, we can just
-do the syntax check inside the main page instead of the iFrame - so this is what happens.
+One important aspect of implementing this is that inside the sandboxed iFrame,
+the information we get in the `error` event is very unhelpful - the line number
+is always 0, and the error message is very generic. Luckily, since we are just
+syntax checking (and not _running_) the code, we can just do the syntax check
+inside the main page instead of the iFrame - so this is what happens.
-Once the code passes syntax checking, it is sent into the iFrame for the next steps. Let's have a
-look at the code running inside the iFrame that receives the code:
+Once the code passes syntax checking, it is sent into the iFrame for the next
+steps. Let's have a look at the code running inside the iFrame that receives the
+code:
```javascript
if (m.data.type == "RunCodeBlock") {
@@ -258,7 +280,11 @@ if (m.data.type == "RunCodeBlock") {
tryEvalSource(m.data.name, processedCode);
} catch (e) {
- ReportError(userCodeBlockIdentifier + m.data.name, "Unknown syntax error.", null);
+ ReportError(
+ userCodeBlockIdentifier + m.data.name,
+ "Unknown syntax error.",
+ null,
+ );
}
}
```
@@ -266,12 +292,13 @@ if (m.data.type == "RunCodeBlock") {
_from
[executionEnvironment_Internal.js](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_Internal.js#L248C10-L248C10)_
-Let's break this down. First, it tries to run `processCodeForExecutionEnvironment`, passing in the
-user's code and some other parameters. We'll see what that does in a moment, but for now, know that
-it takes the user's code, and _changes it_, to allow us to pause it, resume it, reset it, etc.
-Assuming it's successful, then we move to `tryEvalSource`, which makes a new `AsyncFunction` from
-this modified source code, and then runs it! Remember, these stages all take place securely inside
-the iFrame.
+Let's break this down. First, it tries to run
+`processCodeForExecutionEnvironment`, passing in the user's code and some other
+parameters. We'll see what that does in a moment, but for now, know that it
+takes the user's code, and _changes it_, to allow us to pause it, resume it,
+reset it, etc. Assuming it's successful, then we move to `tryEvalSource`, which
+makes a new `AsyncFunction` from this modified source code, and then runs it!
+Remember, these stages all take place securely inside the iFrame.
Let's look at how the code modification/transformation works, and why we do it.
@@ -281,8 +308,8 @@ Let's look at how the code modification/transformation works, and why we do it.
#### Why do we modify/transform the user's code?
-There are a couple of things that we want the user's code to be able to do, that's impossible to
-support without modifying their code.
+There are a couple of things that we want the user's code to be able to do
+that are impossible to support without modifying their code.
##### We want them to be able to have infinite while loops
@@ -297,60 +324,69 @@ void main(){
}
```
-where it just loops and loops until the user quits. However, in a browser, JavaScript is executed on
-the same thread as the page. So normally the browser might do something like this:
+where it just loops and loops until the user quits. However, in a browser,
+JavaScript is executed on the same thread as the page. So normally the browser
+might do something like this:
1. Check for user input
2. Update the page
3. If the user clicks the button, **run some JavaScript**
4. Goto 1
-Which works fine if the 'run some JavaScript' part ends quickly. But if it enters a loop, like in
-the code above, then the browser won't be able to check for input or even update the page until the
-code ends - if it's an infinite loop like above, the page can only crash.
+This works fine if the 'run some JavaScript' part ends quickly. But if it
+enters a loop, like in the code above, then the browser won't be able to check
+for input or even update the page until the code ends - if it's an infinite loop
+like above, the page can only crash.
-What's the solution? We modify loops inside the user's code, so that they give control _back_ to the
-browser periodically. This is done with JavaScript's `async` function support, and requires all user
-functions to be marked as `async`, to have calls to those functions marked with `await`, to have
-code inserted in every loop to handle the control passing, and to have user classes have some
-changes (since constructors can't be async).
+What's the solution? We modify loops inside the user's code, so that they give
+control _back_ to the browser periodically. This is done with JavaScript's
+`async` function support, and requires all user functions to be marked as
+`async`, to have calls to those functions marked with `await`, to have code
+inserted in every loop to handle the control passing, and to have user classes
+have some changes (since constructors can't be async).
Here are some more specific details (optional reading):
-- All loops automatically await a timeout of 0 seconds after executing for more than ~25ms.
-- screen_refresh (and other similar functions) await a window.requestAnimationFrame
+- All loops automatically await a timeout of 0 seconds after executing for more
+ than ~25ms.
+- screen_refresh (and other similar functions) await a
+ window.requestAnimationFrame
- All user functions are marked as async, so that they can use await.
- Similarly, all calls to user functions are marked with await.
-- Constructors cannot be async, so rename all constructors of user classes to `__constructor`, and
- call it when user classes are newed. `let player = new Player()` becomes
+- Constructors cannot be async, so rename all constructors of user classes to
+ `__constructor`, and call it when user classes are newed.
+ `let player = new Player()` becomes
`let player = (new Player()).__constructor()`
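
The bullet points above can be sketched roughly as follows. The names and the ~25ms threshold are illustrative only, not the actual code the transformer generates:

```javascript
// Rough sketch of the kind of code inserted into user loops
// (names and the ~25ms threshold are illustrative).
async function transformedLoop(totalIterations) {
  let lastYield = Date.now();
  let count = 0;
  while (count < totalIterations) {
    count++; // the user's original loop body would run here
    if (Date.now() - lastYield > 25) {
      // Inserted by the transformer: a zero-delay await that hands
      // control back to the browser's event loop before continuing.
      await new Promise((resolve) => setTimeout(resolve, 0));
      lastYield = Date.now();
    }
  }
  return count;
}
```

Because every user function becomes `async`, a call like `transformedLoop(1000)` must itself be `await`ed by its (also transformed) caller.
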
-This same setup is used to enable code pausing, and stopping, by simply listening for
-\*pause/stop/continue **flags\*** when it does the awaits. To stop, we simply throw a
-'ForceBreakLoop' error. To pause, we create a promise and await it. To continue, we call that
-promise.
+This same setup is used to enable code pausing and stopping, by simply
+listening for _pause/stop/continue flags_ when it does the awaits. To stop, we
+simply throw a 'ForceBreakLoop' error. To pause, we create a promise and await
+it. To continue, we resolve that promise.
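
A minimal sketch of the pause/continue idea. The names here are hypothetical, and the real implementation also handles stop flags and errors:

```javascript
// Minimal pause/continue sketch: pausing awaits a promise, continuing
// resolves it (names here are hypothetical).
let continueSignal = null;

function requestPause() {
  return new Promise((resolve) => {
    continueSignal = resolve;
  });
}

function requestContinue() {
  if (continueSignal !== null) {
    continueSignal();
    continueSignal = null;
  }
}

async function pausableUserCode(log) {
  log.push("running");
  await requestPause(); // suspends here until requestContinue() is called
  log.push("resumed");
}
```
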
-_Here's something important to note, for those wondering why we just don't use `eval` instead of
-putting the user's code in a new `Function` object. We couldn't do this transformation if we didn't
-put the user's code inside a function, because you cannot `eval` asynchronous code! Meaning the user
-couldn't write while loops, or any long running code at all!_
+_Here's something important to note, for those wondering why we don't just use
+`eval` instead of putting the user's code in a new `Function` object. We
+couldn't do this transformation if we didn't put the user's code inside a
+function, because you cannot `eval` asynchronous code! Meaning the user couldn't
+write while loops, or any long running code at all!_
##### We want the user to be able to declare global functions, variables, and classes in one block and be able to access them in another
-When we evaluate the user's code, we are technically sticking it inside a function, then running it.
-As such, the variables, functions and classes declared are actually scoped to that function, meaning
-they vanish once the function ends. This obviously isn't very helpful - the user couldn't define
-things in one code block, and use them in another, because they're in different scopes! In fact, we
-couldn't even run the user's main, since it would vanish just after the code that creates it
-finishes evaluating.
+When we evaluate the user's code, we are technically sticking it inside a
+function, then running it. As such, the variables, functions and classes
+declared are actually scoped to that function, meaning they vanish once the
+function ends. This obviously isn't very helpful - the user couldn't define
+things in one code block, and use them in another, because they're in different
+scopes! In fact, we couldn't even run the user's main, since it would vanish
+just after the code that creates it finishes evaluating.
-We could just combine the user's code together into a single piece that executes in the same scope,
-but then we couldn't have hot-reloading, where the user can update their code _while_ the program
-runs.
+We could just combine the user's code together into a single piece that executes
+in the same scope, but then we couldn't have hot-reloading, where the user can
+update their code _while_ the program runs.
-So what we do, is modify the user's code, so that declarations made inside the "global" scope, are
-manually assigned to the _real_ global scope outside the function that the user's code is written
-in. Just as an example, imagine the user has written the following code.
+So what we do is modify the user's code so that declarations made inside the
+"global" scope are manually assigned to the _real_ global scope outside the
+function that the user's code is written in. As an example, imagine the user
+has written the following code.
General Code:
@@ -366,8 +402,8 @@ function main() {
}
```
-If we evaluated each block by putting the block's code directly into a new `Function` and running
-the function, it would be equivalent to the following:
+If we evaluated each block by putting the block's code directly into a new
+`Function` and running the function, it would be equivalent to the following:
```javascript
function GeneralCode() {
@@ -409,34 +445,39 @@ MainCode();
main();
```
-Notice how every time we define something that should be in the global scope, we assign it to
-`window`? This is (_one name for_) the global scope in JavaScript. So now the
-variables/functions/classes are actually in the global scope, and everything works as expected.
+Notice how every time we define something that should be in the global scope, we
+assign it to `window`? This is (_one name for_) the global scope in JavaScript.
+So now the variables/functions/classes are actually in the global scope, and
+everything works as expected.
##### We also want them to be able to restart their program without old variables and functions being left behind
-Now that we have the variables in the global scope, we have a problem. Let's say the user runs the
-program above once. They then remove the line of code defining `globalVariable`. If they restart
-their program, you'd expect that an error occurs when they reach the line
-`write_line(globalVariable);`, since `globalVariable` isn't defined right?
-
-But no error occurs! This is because, the global variable was already set the _first_ time they ran
-the program, and when they 'restarted' it, all we did was call `main()` again, meaning the global
-variable stayed in existence! We could fully reset the executionEnvironment with
-`resetEnvironment()`, but this takes a long time (up to 20 seconds), so doing this every time the
-user runs their code would be a poor user experience.
-
-Luckily, we already know what the global variables are - we already transform them after all. So
-what we can do is keep a list of them, and then when the user restarts the program, we can `delete`
-all the variables from the global `window` object, and then we get a clean run; hence the function
-`cleanEnvironment()` exists. Now when the user runs, they'll get an error as they should!
+Now that we have the variables in the global scope, we have a problem. Let's say
+the user runs the program above once. They then remove the line of code defining
+`globalVariable`. If they restart their program, you'd expect an error to occur
+when they reach the line `write_line(globalVariable);`, since `globalVariable`
+isn't defined, right?
+
+But no error occurs! This is because the global variable was already set the
+_first_ time they ran the program, and when they 'restarted' it, all we did was
+call `main()` again, meaning the global variable stayed in existence! We could
+fully reset the executionEnvironment with `resetEnvironment()`, but this takes a
+long time (up to 20 seconds), so doing this every time the user runs their code
+would be a poor user experience.
+
+Luckily, we already know what the global variables are - we already transform
+them after all. So what we can do is keep a list of them, and then when the user
+restarts the program, we can `delete` all the variables from the global `window`
+object, and then we get a clean run; hence the function `cleanEnvironment()`
+exists. Now when the user runs, they'll get an error as they should!
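The clean-up idea can be sketched like this. `registerUserGlobal` and `cleanEnvironmentSketch` are hypothetical names standing in for the transform's bookkeeping and the real `cleanEnvironment()`:

```javascript
// Sketch of cleaning user globals between runs (hypothetical names; the real
// list is built by a Babel transform, described later).
const userGlobals = new Set();

// The transformed code records each global it assigns.
function registerUserGlobal(name, value) {
  userGlobals.add(name);
  globalThis[name] = value; // `window` in the browser
}

// Called on restart: delete every recorded global for a clean run.
function cleanEnvironmentSketch() {
  for (const name of userGlobals) delete globalThis[name];
  userGlobals.clear();
}
```

Note that `delete` works here because the user's globals are assigned as properties of `window`, not declared with `var` at the top level.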
#### How do we modify the code?
-While we could just modify the text as a string, this is error prone and kind of hacky. Instead, we
-use a JavaScript library called **Babel**, which parses the user's JavaScript, and creates what's
-called an AST (or abstract-syntax-tree), which lets us treat each part of the code as separate
-objects we can manipulate. For example:
+While we could just modify the text as a string, that is error-prone and kind
+of hacky. Instead, we use a JavaScript library called **Babel**, which parses
+the user's JavaScript and creates what's called an AST (abstract syntax tree),
+letting us treat each part of the code as separate objects we can manipulate.
+For example:
```javascript
let a = 10;
@@ -452,7 +493,8 @@ There's no need to understand this too deeply, but it's just good to know.
#### Putting it all together
-Now we have all the pieces needed to understand the `processCodeForExecutionEnvironment` function.
+Now we have all the pieces needed to understand the
+`processCodeForExecutionEnvironment` function.
```javascript
function processCodeForExecutionEnvironment(
@@ -487,18 +529,20 @@ function processCodeForExecutionEnvironment(
_from
[executionEnvironment_CodeProcessor.js](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_CodeProcessor.js#L275)_
-We can see it takes the user's code, and also some _names_ for the variables that will handle making
-the code stop/pause/continue - these are the _flags_ mentioned earlier. It also takes the name of a
-callback to call, when the user's code actually pauses.
-
-We can see the first thing it does is assign these to some variables - you can ignore that part for
-now, it's just an implementation detail (it doesn't seem possible to pass parameters into Babel
-transforms, so I just used global variables...). But after that, it calls Babel with the
-`"findGlobalDeclarationsTransform"`, this handles updating the list of global variables that we
-clear when restarting the program. Then we run it again with two more passes -
-`"makeFunctionsAsyncAwaitTransform"`, and `"asyncify"`, which handle making functions/calls
-async/await along with the scope changes, and inserting the yielding back to the browser during
-loops, respectively.
+We can see it takes the user's code, and also some _names_ for the variables
+that will handle making the code stop/pause/continue - these are the _flags_
+mentioned earlier. It also takes the name of a callback to call when the user's
+code actually pauses.
+
+We can see the first thing it does is assign these to some variables - you can
+ignore that part for now, it's just an implementation detail (it doesn't seem
+possible to pass parameters into Babel transforms, so I just used global
+variables...). After that, it calls Babel with the
+`"findGlobalDeclarationsTransform"`, which handles updating the list of global
+variables that we clear when restarting the program. Then we run it again with
+two more passes - `"makeFunctionsAsyncAwaitTransform"` and `"asyncify"` - which
+handle making functions/calls async/await along with the scope changes, and
+inserting the yielding back to the browser during loops, respectively.
## A real function is created from the transformed code
@@ -514,9 +558,9 @@ processedCode = processCodeForExecutionEnvironment(
tryEvalSource(m.data.name, processedCode);
```
-Hopefully we now understand what the first line here does. Now we get to actually run the processed
-code! First we have to turn it into a real function, and this is exactly what `tryEvalSource` does
-first. Let's have a look inside:
+Hopefully we now understand what the first line here does. Now we get to
+actually run the processed code! First we have to turn it into a real function,
+and this is exactly what `tryEvalSource` does first. Let's have a look inside:
```javascript
async function tryEvalSource(block, source) {
@@ -532,50 +576,64 @@ async function tryEvalSource(block, source) {
_from
[executionEnvironment_Internal.js](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_Internal.js#L191)_
-As can be seen, the first thing that happens is that we call `createEvalFunctionAndSyntaxCheck`,
-which does exactly what it says. You'll notice we're syntax checking here as well - this isn't
-exactly deliberate, it just happens automatically when the `Function` object is created. Still, it's
-helpful if the Babel output had a syntax error, for instance. The important part is inside
-`createEvalFunctionAndSyntaxCheck`, here:
+As can be seen, the first thing that happens is that we call
+`createEvalFunctionAndSyntaxCheck`, which does exactly what it says. You'll
+notice we're syntax checking here as well - this isn't exactly deliberate; it
+just happens automatically when the `Function` object is created. Still, it's
+helpful if the Babel output had a syntax error, for instance. The important part
+is inside `createEvalFunctionAndSyntaxCheck`, here:
```javascript
return Object.getPrototypeOf(async function () {}).constructor(
- '"use strict";' + source + "\n//# sourceURL=" + userCodeBlockIdentifier + block,
+ '"use strict";' +
+ source +
+ "\n//# sourceURL=" +
+ userCodeBlockIdentifier +
+ block,
);
```
_from
[executionEnvironment_Internal.js](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_Internal.js#L179C44-L179C44)_
-Here's where the user's code _finally_ becomes a real function, that will actually be called! Notice
-it looks a little different to the `new Function("...")` example earlier. This is because, it's
-creating an `AsyncFunction`, which doesn't have a nice constructor, so we access it directly. The
-`AsyncFunction` is important, because all of that work we did before modifying the user's code to
-give control back to the browser when it loops, won't work without it being an `AsyncFunction`!
-
-You'll also notice that we modify the user's code slightly; we don't just pass `source` directly, we
-add `"use strict";` at the start, and `//# sourceURL=...` at the end. What do these do?
-
-- `"use strict;"` makes the user's JavaScript code execute in strict mode, which tidies up a lot of
- the language's semantics, forces variable declarations to be explicit, and overall improves code
- quality and makes errors easier to track down. We couldn't turn on `"use strict";` without the
- manual scoping fixes either!
-- `//# sourceURL=...` tells the browser what 'source file' the code is from. This means that when
- the browser reports an error, we'll be able to tell what code block it came from! Notice we add
- `userCodeBlockIdentifier` at the start? This is just a short string that we can use to help us
- tell if an error came from user code, or if it came from code in the IDE itself. An example might
- look like this `//# sourceURL=__USERCODE__MainCode`, and so if an error occurs, we will see it
- came from `__USERCODE__MainCode`, and tell the user it came from their "Main Code" block!
-
-Now we can finally call this function to run the user's code! Remember, this won't run their
-_program_ but it will run the code which creates all their functions, global variables, classes, and
-of course their `main()` function. Actually running the code happens inside `tryRunFunction`, and
-we'll look at that in just a short bit. But just know now that the code has been run (or failed with
-an error); let's assume it successfully ran, and so we can actually run the user's `main`!
+Here's where the user's code _finally_ becomes a real function that will
+actually be called! Notice it looks a little different to the
+`new Function("...")` example earlier. This is because it's creating an
+`AsyncFunction`, which doesn't have a nice constructor, so we access it
+directly. The `AsyncFunction` is important, because all of the work we did
+before, modifying the user's code to give control back to the browser when it
+loops, won't work without it being an `AsyncFunction`!
+
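As a concrete illustration of that trick (this snippet is illustrative, not from the project):

```javascript
// There is no global `AsyncFunction` binding, so we grab its constructor off
// an async function expression - the same trick used in the snippet above.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;

// It behaves like `new Function(...)`, but the result is async and can await.
const addLater = new AsyncFunction(
  "x",
  "return (await Promise.resolve(x)) + 1;",
);
```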
+You'll also notice that we modify the user's code slightly; we don't just pass
+`source` directly, we add `"use strict";` at the start, and `//# sourceURL=...`
+at the end. What do these do?
+
+- `"use strict";` makes the user's JavaScript code execute in strict mode, which
+  tidies up a lot of the language's semantics, forces variable declarations to
+  be explicit, and overall improves code quality and makes errors easier to
+  track down. We couldn't turn on `"use strict";` without the manual scoping
+  fixes either!
+- `//# sourceURL=...` tells the browser what 'source file' the code is from.
+ This means that when the browser reports an error, we'll be able to tell what
+ code block it came from! Notice we add `userCodeBlockIdentifier` at the start?
+ This is just a short string that we can use to help us tell if an error came
+ from user code, or if it came from code in the IDE itself. An example might
+  look like this: `//# sourceURL=__USERCODE__MainCode`, and so if an error
+  occurs, we will see it came from `__USERCODE__MainCode`, and can tell the
+  user it came from their "Main Code" block!
+
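The effect of the `sourceURL` comment can be seen in a small experiment. This is an illustrative sketch (not the IDE's code); in V8-based environments the name given in the magic comment shows up in the error's stack:

```javascript
// Illustrative: the sourceURL magic comment names the compiled script, so
// error stacks can identify which user code block an error came from.
const userCodeBlockIdentifier = "__USERCODE__"; // same idea as in the IDE

// Build a deliberately faulty "user code block" the way the IDE does.
const faulty = Object.getPrototypeOf(async function () {}).constructor(
  '"use strict";throw new Error("boom");' +
    "\n//# sourceURL=" +
    userCodeBlockIdentifier +
    "MainCode",
);

// Returns true if the error's stack names the user code block.
async function whereDidItCrash() {
  try {
    await faulty();
  } catch (err) {
    return err.stack.includes(userCodeBlockIdentifier + "MainCode");
  }
}
```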
+Now we can finally call this function to run the user's code! Remember, this
+won't run their _program_, but it will run the code which creates all their
+functions, global variables, classes, and of course their `main()` function.
+Actually running the code happens inside `tryRunFunction`, and we'll look at
+that in just a short bit. For now, just know that the code has been run (or
+failed with an error); let's assume it ran successfully, and so we can actually
+run the user's `main`!
## Now it needs to run the user's main
-If we recall, this all started with the user pressing the Run button, which looked like this:
+If we recall, this all started with the user pressing the Run button, which
+looked like this:
```javascript
clearErrorLines();
@@ -594,18 +652,24 @@ executionEnviroment.runProgram();
_from
[editorMain.js - runProgram()](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/editorMain.js#L194C6-L194C6)_
-We now know what `runAllCodeBlocks` does quite well - it syntax checks the code, sends it to the
-iFrame, the code gets transformed, stuffed into a function, and then run! So what does
-`executionEnviroment.runProgram()` do? It's comparatively _much_ simpler!
+We now know what `runAllCodeBlocks` does quite well - it syntax checks the code,
+sends it to the iFrame, the code gets transformed, stuffed into a function, and
+then run! So what does `executionEnviroment.runProgram()` do? It's comparatively
+_much_ simpler!
-First thing it does is send a message to the iFrame, telling it to run the program - we definitely
-don't want to run the program in the main page, so this is all secured inside the iFrame, like the
-execution earlier. Upon receiving this message, it then calls its own internal `runProgram()`
+The first thing it does is send a message to the iFrame, telling it to run the
+program - we definitely don't want to run the program in the main page, so this
+is all secured inside the iFrame, like the execution earlier. Upon receiving
+this message, it then calls its own internal `runProgram()`:
```javascript
async function runProgram() {
if (window.main === undefined || !(window.main instanceof Function)) {
- ReportError(userCodeBlockIdentifier + "Program", "There is no main() function to run!", null);
+ ReportError(
+ userCodeBlockIdentifier + "Program",
+ "There is no main() function to run!",
+ null,
+ );
return;
}
if (!mainIsRunning) {
@@ -631,22 +695,23 @@ First, it checks to see if the main program even exists:
if (window.main === undefined || !(window.main instanceof Function))
```
-We can see how it's just checking the 'global' scope of `window` - which is the same one we know the
-user's functions get assigned to! So if the use created a `main` function, we'll be able to find it.
-We also make sure it _is_ actually a function, and that they didn't do something like
-`let main = 10;`
+We can see how it's just checking the 'global' scope of `window` - which is the
+same one we know the user's functions get assigned to! So if the user created a
+`main` function, we'll be able to find it. We also make sure it _is_ actually a
+function, and that they didn't do something like `let main = 10;`
-Next we make sure it isn't already running. If it was, we could end up with `main()` running
-multiple times simultaneously, not ideal!
+Next we make sure it isn't already running. If it was, we could end up with
+`main()` running multiple times simultaneously - not ideal!
```javascript
if (!mainIsRunning){
mainLoopStop = false;
```
-If it wasn't already running, it's time to start it! First turn off the `mainLoopStop` flag.
-Remember the async control flags mentioned earlier - this is one of them! If it's `true`, the
-program will stop as soon as it can, so we make sure it's `false`.
+If it wasn't already running, it's time to start it! First turn off the
+`mainLoopStop` flag. Remember the async control flags mentioned earlier - this
+is one of them! If it's `true`, the program will stop as soon as it can, so we
+make sure it's `false`.
```javascript
mainIsRunning = true;
@@ -656,35 +721,43 @@ mainIsRunning = false;
parent.postMessage({ type: "programStopped" }, "*");
```
-Now we set `mainIsRunning` to `true` (so that we can't start it multiple times at the same time),
-and post a message to the outside window `"programStarted"` - there's a listener in the main page
-that will then change the green buttons accordingly.
+Now we set `mainIsRunning` to `true` (so that we can't start it multiple times
+at once), and post a `"programStarted"` message to the outside window - there's
+a listener in the main page that will then change the green buttons
+accordingly.
-Finally, the moment of truth: `await tryRunFunction(window.main);` We run the program! It's called
-with `await`, which means that the code will _wait_ for it to finish before continuing. Remember we
-made all the user functions `async`? This allows them to give control back to the browser
-momentarily, but it also means that they can't stop things that call them from continuing to the
-next line of code - so we `await` to make sure we wait for the program to completely stop.
+Finally, the moment of truth: `await tryRunFunction(window.main);` We run the
+program! It's called with `await`, which means that the code will _wait_ for it
+to finish before continuing. Remember we made all the user functions `async`?
+This allows them to give control back to the browser momentarily, but it also
+means that they can't stop things that call them from continuing to the next
+line of code - so we `await` to make sure we wait for the program to completely
+stop.
-Once it does finally end (which will happen if we set `mainLoopStop` to `true`), we set
-`mainIsRunning` back to `false`, so the user can start it again, and then post a message back to the
-main window `"programStopped"`, which will again update the buttons accordingly.
+Once it does finally end (which will happen if we set `mainLoopStop` to `true`),
+we set `mainIsRunning` back to `false`, so the user can start it again, and then
+post a message back to the main window `"programStopped"`, which will again
+update the buttons accordingly.
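Put together, the guard and messaging logic sketches out like this (illustrative names; `postToParent` stands in for `parent.postMessage`):

```javascript
// Illustrative model of runProgram()'s guard and start/stop messages.
let mainIsRunning = false;
let mainLoopStop = false;

const messages = []; // stand-in for parent.postMessage
function postToParent(type) {
  messages.push(type);
}

async function runProgramSketch(main) {
  if (mainIsRunning) return; // never run main() twice at once
  mainLoopStop = false; // clear the async stop flag
  mainIsRunning = true;
  postToParent("programStarted");
  await main(); // wait for the user's program to completely stop
  mainIsRunning = false;
  postToParent("programStopped");
}
```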
### `tryRunFunction(func)` - what does it do?
-The responsibility of `tryRunFunction` - which is used when running the code blocks earlier as well,
-is to run the user's code, and then detect when it has errors and report them to the user.
+The responsibility of `tryRunFunction` - which is also used when running the
+code blocks earlier - is to run the user's code, detect when it has errors, and
+report them to the user.
-These aren't syntax errors in this case, these are runtime errors (for instance if the user tries to
-call a function that doesn't exist, or access outside the bounds of an array), and so we go about
-detecting them in a way a bit different to the syntax errors before.
+These aren't syntax errors in this case; these are runtime errors (for instance
+if the user tries to call a function that doesn't exist, or access outside the
+bounds of an array), and so we go about detecting them a bit differently to the
+syntax errors before.
-And after all, we can't use the window's `error` callback for the same reasons mentioned earlier -
-inside the iFrame, the error message is generic, and line number reported is always 0! And we
-certainly can't run the code outside the iFrame, or that would defeat the entire point of having it.
+After all, we can't use the window's `error` callback, for the same reasons
+mentioned earlier - inside the iFrame, the error message is generic, and the
+line number reported is always 0! And we certainly can't run the code outside
+the iFrame, or that would defeat the entire point of having it.
-If we look inside `tryRunFunction`, we'll see it actually ends up calling `tryRunFunction_Internal`,
-which is a bit more interesting. Here's a simplified version:
+If we look inside `tryRunFunction`, we'll see it actually ends up calling
+`tryRunFunction_Internal`, which is a bit more interesting. Here's a simplified
+version:
```javascript
async function tryRunFunction_Internal(func) {
@@ -705,27 +778,30 @@ async function tryRunFunction_Internal(func) {
_from
[executionEnvironment_Internal.js](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_Internal.js#L138)_
-We can see it takes the user's function (for instance, the user's `main()`, or the `AsyncFunctions`
-we made from their code blocks), and tries to run it. It waits for it to finish with `await`, and if
-it finishes without issues, it returns "success!".
+We can see it takes the user's function (for instance, the user's `main()`, or
+the `AsyncFunctions` we made from their code blocks), and tries to run it. It
+waits for it to finish with `await`, and if it finishes without issues, it
+returns "success!".
-However, if an error was thrown, we catch it. If it was a `ForceBreakLoop` error, then we know it
-threw it because the user pressed the Stop button, not because it crashed, and so we just report
-back that it "Stopped". However, if that didn't happen, we figure out information about the error
-(such as its line number and what code block it happened in) with `parseErrorStack(err)`, and then
+However, if an error was thrown, we catch it. If it was a `ForceBreakLoop`
+error, then we know it was thrown because the user pressed the Stop button, not
+because the program crashed, and so we just report back that it "Stopped".
+Otherwise, we figure out information about the error (such as its line number
+and what code block it happened in) with `parseErrorStack(err)`, and then
return information about the error.
-This information is received by the original `tryRunFunction`, and if an error occurred it reports
-it to the user via `ReportError`.
+This information is received by the original `tryRunFunction`, and if an error
+occurred it reports it to the user via `ReportError`.
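The shape of that logic, reduced to a sketch (hypothetical helper names; `parseErrorStackSketch` stands in for the real `parseErrorStack`):

```javascript
// Illustrative reduction of tryRunFunction_Internal's error handling.
class ForceBreakLoop extends Error {}

function parseErrorStackSketch(err) {
  // The real function reads err.stack; here we just pass the message on.
  return { message: err.message };
}

async function tryRunFunctionSketch(func) {
  try {
    await func(); // wait for the user's async function to finish
    return { state: "success" };
  } catch (err) {
    if (err instanceof ForceBreakLoop) return { state: "stopped" };
    return { state: "error", ...parseErrorStackSketch(err) };
  }
}
```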
Let's take a closer look at `parseErrorStack`, as the last stop on our journey.
### parseErrorStack - what does _it_ do?
-Once we catch an error, the problem becomes "how do we report it to the user?" We need to give them
-the error message, and at least a line number and code block to look at. If the error message had
-members like `err.lineNumber` or `err.fileName` it'd be great, but they don't (unless you're using
-Firefox...). However, all modern browsers support `err.stack`, which gives us a piece of text
+Once we catch an error, the problem becomes "how do we report it to the user?"
+We need to give them the error message, and at least a line number and code
+block to look at. It'd be great if the error object had members like
+`err.lineNumber` or `err.fileName`, but it doesn't (unless you're using
+Firefox...). However, all modern browsers support `err.stack`, which gives us
+a piece of text
describing the error and where it happened. It looks a bit like this:
```javascript
@@ -738,29 +814,33 @@ runProgram@http://localhost:8000/executionEnvironment_Internal.js:132:15
EventListener.handleEvent*@http://localhost:8000/executionEnvironment_Internal.js:144:8
```
-We can see on each line, the function, filename, line number, and even column number! The problem,
-is that `stack` is actually non-standardised JavaScript, and so each browser implements it slightly
-differently. Additionally, we still have to actually parse (read) the string, to get all the
-information out of it. This is the job that `parseErrorStack` performs.
+We can see on each line the function, filename, line number, and even column
+number! The problem is that `stack` is actually non-standardised JavaScript,
+and so each browser implements it slightly differently. Additionally, we still
+have to actually parse (read) the string to get all the information out of it.
+This is the job that `parseErrorStack` performs.
-The actual method isn't that complicated. It uses a regex that is designed to work across both
-Firefox and Chrome based browsers (including Edge), that reads out the file name and line number. It
-then returns these! Not too hard overall. One thing to note, is there are two lines inside
-`parseErrorStack` that might be confusing:
+The actual method isn't that complicated. It uses a regex, designed to work
+across both Firefox and Chrome-based browsers (including Edge), that reads out
+the file name and line number, and then returns them! Not too hard overall. One
+thing to note is that there are two lines inside `parseErrorStack` that might be
+confusing:
```javascript
-if (file.startsWith(userCodeBlockIdentifier)) lineNumber -= userCodeStartLineOffset;
+if (file.startsWith(userCodeBlockIdentifier))
+ lineNumber -= userCodeStartLineOffset;
```
_from
[executionEnvironment_Internal.js - parseErrorStack](https://github.com/thoth-tech/SplashkitOnline/blob/ddb06cec6296d6de905ee0a90084a4c1a71c7a58/Browser_IDE/executionEnvironment_Internal.js#L123)_
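As an illustration of the regex idea (this is a simplified stand-in, not the project's actual pattern), a single expression can cover both stack styles shown earlier:

```javascript
// Simplified stand-in for the cross-browser stack-frame regex: it pulls the
// file name, line and column out of both Firefox-style
// ("fn@file:line:col") and Chromium-style ("at fn (file:line:col)") frames.
const frameRegex = /(?:@|\()?([^@()\s]+):(\d+):(\d+)\)?\s*$/;

function parseFrame(stackLine) {
  const match = stackLine.match(frameRegex);
  if (!match) return null;
  return { file: match[1], line: Number(match[2]), column: Number(match[3]) };
}
```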
-Once we have extracted the line number, we check to see if the file name starts with the
-`userCodeBlockIdentifier` (remember this from earlier, when we added the `//# sourceURL=` to the
-user's code to help identify it?). If it starts with this, we know it's user code. And then we
-subtract `userCodeStartLineOffset` from it. Why do we do that? The answer is that when we create the
-`AsyncFunction` object, Firefox actually adds some lines to the start. For example, let's say we
-create a simple function from text:
+Once we have extracted the line number, we check to see if the file name starts
+with the `userCodeBlockIdentifier` (remember this from earlier, when we added
+the `//# sourceURL=` to the user's code to help identify it?). If it starts with
+this, we know it's user code. And then we subtract `userCodeStartLineOffset`
+from it. Why do we do that? The answer is that when we create the
+`AsyncFunction` object, Firefox actually adds some lines to the start. For
+example, let's say we create a simple function from text:
```javascript
let myFunc = new Function("console.log('Hi!');");
@@ -780,35 +860,39 @@ function anonymous() {
}
```
-See how there are two extra lines at the start? When the ExecutionEnvironment starts, it actually
-detects how many lines the browser adds at the start, and stores it inside
-`userCodeStartLineOffset` - so in Firefox, `userCodeStartLineOffset` is equal to `2`. Subtracting
-this from `lineNumber` then gives us the _actual_ line number of the error, so that we can highlight
-it in the user's code editor.
+See how there are two extra lines at the start? When the ExecutionEnvironment
+starts, it actually detects how many lines the browser adds at the start, and
+stores it inside `userCodeStartLineOffset` - so in Firefox,
+`userCodeStartLineOffset` is equal to `2`. Subtracting this from `lineNumber`
+then gives us the _actual_ line number of the error, so that we can highlight it
+in the user's code editor.
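One illustrative way to measure this offset (a sketch using `Function.prototype.toString`, which may differ from how the project detects it; `detectStartLineOffset` is a hypothetical name):

```javascript
// Count how many lines the engine prepends before the body when a function is
// built from source text. Per the example above, this would come out as 2 in
// Firefox; the value varies by engine.
function detectStartLineOffset() {
  const probe = new Function("/*probe*/");
  // toString() reproduces the generated wrapper plus our body verbatim.
  const lines = probe.toString().split("\n");
  return lines.findIndex((line) => line.includes("/*probe*/"));
}
```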
## Recap
-Hopefully having read all of that, you have a decent understanding of the steps SplashKit Online
-takes to run the user's code! As a recap, let's have one more look at the overview, which hopefully
-makes a lot more sense now.
+Hopefully having read all of that, you have a decent understanding of the steps
+SplashKit Online takes to run the user's code! As a recap, let's have one more
+look at the overview, which hopefully makes a lot more sense now.
1. Before the user does anything...
1. The IDE starts up, and creates an ExecutionEnvironment.
2. The ExecutionEnvironment creates an iFrame, and loads SplashKit inside it.
-2. User writes code into the code editor (currently there are two 'code blocks', General and Main).
-3. User presses the Run button. First we have to run the code blocks, to create all the user's
- functions/classes and initialize global variables.
-4. Pressing run calls `ExecutionEnvironment.runCodeBlocks`, passing in the General Code and Main
- Code code blocks. For each code block: 1. The code block's text is sent as an argument to
- `ExecutionEnvironment.runCodeBlock(block, source)` 2. The source code gets syntax checked. 3. If
- it is syntactically correct, it is then sent as a `message` into the ExecutionEnvironment's
- iFrame.
+2. User writes code into the code editor (currently there are two 'code blocks',
+ General and Main).
+3. User presses the Run button. First we have to run the code blocks, to create
+ all the user's functions/classes and initialize global variables.
+4. Pressing run calls `ExecutionEnvironment.runCodeBlocks`, passing in the
+   General Code and Main Code code blocks. For each code block:
+   1. The code block's text is sent as an argument to
+      `ExecutionEnvironment.runCodeBlock(block, source)`
+   2. The source code gets syntax checked.
+   3. If it is syntactically correct, it is then sent as a `message` into the
+      ExecutionEnvironment's iFrame.
5. The following steps all happen inside the iFrame (for security purposes)
1. The iFrame receives the message.
2. The code is transformed to make it runnable within the environment
3. A real function is created from the transformed code.
4. **The code is run!**
-6. Now it needs to run the user's main: `ExecutionEnvironment.runProgram()` is called.
+6. Now it needs to run the user's main: `ExecutionEnvironment.runProgram()` is
+ called.
7. This sends a message into the iFrame.
8. The following steps all happen inside the iFrame (for security purposes)
   1. The iFrame checks if the user has created a `main()`
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/api-support-tests.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/api-support-tests.md
index ab2a8b5bb..a93571ec2 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/api-support-tests.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/api-support-tests.md
@@ -1,28 +1,31 @@
---
title: API Support Tests
description:
- The results of running tests to check support for parts of the SplashKit API, across the two
- currently supported languages.
+ The results of running tests to check support for parts of the SplashKit API,
+ across the two currently supported languages.
---
# Report on SplashKit API functionality in SplashKit Online
### Overview
-While much of the SplashKit API already works in browsers thanks to Emscripten, there are still
-areas of functionality that do not. This report will outline what is working, what isn't, and the
-general reason why.
+While much of the SplashKit API already works in browsers thanks to Emscripten,
+there are still areas of functionality that do not. This report will outline
+what is working, what isn't, and the general reason why.
### SplashKit Tests
-It was decided that the most efficient way to test SplashKit's functionality was to use the existing
-suite of tests that exist inside `splashkit-core`. To test the JavaScript language backend, these
-tests had to be converted. To assist with this, a small C++ to JavaScript conversion utility was
-written; the result of this was then patched up manually. For C++, a few of the tests had to be
-slightly modified, but all in all are practically identical to their original source.
+It was decided that the most efficient way to test SplashKit's functionality was
+to use the existing suite of tests inside `splashkit-core`. To test the
+JavaScript language backend, these tests had to be converted. To assist with
+this, a small C++ to JavaScript conversion utility was written; its output was
+then patched up manually. For C++, a few of the tests had to be slightly
+modified, but they remain practically identical to their original source.
-The project file containing these tests will be added to the SplashKit Online DemoProjects folder
-for reproducibility. Here are the results grouped by API category.
+The project file containing these tests will be added to the SplashKit Online
+DemoProjects folder for reproducibility. Here are the results grouped by API
+category.
| Field | JavaScript | C++ | Details |
| ---------------- | ----------------- | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
@@ -46,8 +49,8 @@ for reproducibility. Here are the results grouped by API category.
| Utilities | Near Full Support | Full Support | [Display Dialog](https://splashkit.io/api/utilities/#display-dialog) does not work in JavaScript backend, as it enters a busy loop that freezes the page. |
| Windows | Partial Support | Partial Support | No support for multiple windows, or for moving the window. No way to close the current window. |
-And here are the results specific to each test - some tests test multiple things unfortunately, so
-some of these results aren't very helpful.
+And here are the results specific to each test. Unfortunately, some tests cover
+multiple things at once, so some of these results aren't very granular.
| Test | JavaScript | C++ | Details |
| ------------------- | ---------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-outcome.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-outcome.md
index d7fc0a0bf..721660feb 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-outcome.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-outcome.md
@@ -10,9 +10,10 @@ title: SplashKit Online Research Spike Outcome
## Goals / Deliverables
-The goal of this spike was to investigate whether Emscripten and Emception could be used to compile
-and run SplashKit online via WASM, and in doing so produce this report. In the process, a fork of
-SplashKit-Core was made to improve reproducibility. This can be found here:
+The goal of this spike was to investigate whether Emscripten and Emception could
+be used to compile and run SplashKit online via WASM, and in doing so produce
+this report. In the process, a fork of SplashKit-Core was made to improve
+reproducibility. This can be found here:
https://github.com/WhyPenguins/splashkit-core/tree/EmscriptenTest
## Technologies, Tools, and Resources used
@@ -28,69 +29,78 @@ https://github.com/WhyPenguins/splashkit-core/tree/EmscriptenTest
## Tasks undertaken
-Here are the key tasks that were performed to produce the main results. The actual path taken took a
-bit more research and experimentation.
+Here are the key tasks that were performed to produce the main results. The
+actual path taken took a bit more research and experimentation.
### Testing Emscripten
- Installed and activated Emscripten using emsdk (see
https://emscripten.org/docs/getting_started/downloads.html)
- Tried compiling some simple SDL code (such as that found here
- https://blog.conan.io/2023/07/20/introduction-to-game-dev-with-sdl2.html) with the command
+ https://blog.conan.io/2023/07/20/introduction-to-game-dev-with-sdl2.html) with
+ the command
`emcc -sUSE_SDL=2 -sUSE_SDL_IMAGE=2 -sSDL2_IMAGE_FORMATS='["bmp","png","xpm"]' sdl_test.cpp -o sdl_test.html`.
- Emscripten already has ports for many libraries such as SDL, which was found out about here
- (https://emscripten.org/docs/compiling/Building-Projects.html)
+  Emscripten already has ports for many libraries such as SDL, as documented
+  here (https://emscripten.org/docs/compiling/Building-Projects.html)
- Ran a python web server with `python -m http.server`
- Navigated to localhost:8000 - the program was running in the browser.
### Compiling SplashKit to WASM
-- This took a few changes to SplashKit's source code. To make this easier to reproduce, a fork of
- SplashKit-Core has been created that has a branch with the changes required to make SplashKit
- compile a simple example under Emscripten. See here:
+- This took a few changes to SplashKit's source code. To make this easier to
+ reproduce, a fork of SplashKit-Core has been created that has a branch with
+ the changes required to make SplashKit compile a simple example under
+ Emscripten. See here:
https://github.com/WhyPenguins/splashkit-core/tree/EmscriptenTest
#### The following is a brief list of changes
- Cloned SplashKit-Core
- Modified CMakeLists.txt as follows: - Appended `set(CMAKE_C_COMPILER "emcc") `
- `set(CMAKE_CXX_COMPILER "emcc")` at the top. - Appended `-sUSE_SDL=2` to the make flags. -
- Appended the following to be linked: - `-sUSE_SDL=2` - `-sUSE_SDL_TTF=2` - `-sUSE_SDL_GFX=2` -
- `-sUSE_SDL_NET=2` - `-sUSE_SDL_MIXER=2` - `-sUSE_SDL_IMAGE=2` -
- `-sSDL2_IMAGE_FORMATS='["bmp","png","xpm"]'` and removed any existing duplicates. - Modified a few
- of the files and dependencies to either use Windows or Linux headers depending on what they
- required (perhaps the build environment was unusual). - Commented out tests
+  `set(CMAKE_CXX_COMPILER "emcc")` at the top. - Appended `-sUSE_SDL=2` to the
+  make flags. - Appended the following flags to be linked: `-sUSE_SDL=2`,
+  `-sUSE_SDL_TTF=2`, `-sUSE_SDL_GFX=2`, `-sUSE_SDL_NET=2`, `-sUSE_SDL_MIXER=2`,
+  `-sUSE_SDL_IMAGE=2`, and `-sSDL2_IMAGE_FORMATS='["bmp","png","xpm"]'`, and
+  removed any existing duplicates. - Modified a few of the files and
+  dependencies to use either Windows or Linux headers, depending on what they
+  required (perhaps the build environment was unusual). - Commented out tests.
- Modified web_driver.cpp and terminal.cpp so they were stubs without includes.
- At this point running `cmake -G "Unix Makefiles" . && make` built.
- To test functionality simply, the code from the starting tutorial
- (https://splashkit.io/articles/guides/tags/starter/get-started-drawing/) was brought across and
- replaced the Tests in the test folder. The test target in the makefile was modified to output this
- test. `set(CMAKE_EXECUTABLE_SUFFIX ".html")` was also important to make it output properly.
-- From here, the Python webserver was started in the output directory, and the starting tutorial
- could be ran in the browser.
+ (https://splashkit.io/articles/guides/tags/starter/get-started-drawing/) was
+ brought across and replaced the Tests in the test folder. The test target in
+ the makefile was modified to output this test.
+ `set(CMAKE_EXECUTABLE_SUFFIX ".html")` was also important to make it output
+ properly.
+- From here, the Python webserver was started in the output directory, and the
+  starting tutorial could be run in the browser.
### Compiling Emception
- First Docker was installed, and WSL2 setup.
- Next, Emception was cloned and built following the instructions
(https://github.com/jprendes/emception)
-- Unfortunately, a number of issues were encountered. Compiling LLVM took approximately 16GB of RAM,
- and so the VM's RAM and swap limits needed to be adjusted; otherwise the compilation process was
- killed. It also took approximately a day.
-- Compilation errors were encountered later on. These have been reported already on the repository
- (https://github.com/jprendes/emception/issues/24), and no fix nor work around has been proposed
- yet. In order to not spend too long, Emception was shelved for now to work on interfacing
- SplashKit with Javascript.
+- Unfortunately, a number of issues were encountered. Compiling LLVM required
+  approximately 16 GB of RAM, so the VM's RAM and swap limits needed to be
+  raised; otherwise the compilation process was killed. It also took
+  approximately a day.
+- Compilation errors were encountered later on. These have already been reported
+  on the repository (https://github.com/jprendes/emception/issues/24), and no
+  fix or workaround has been proposed yet. To avoid spending too long on this,
+  Emception was shelved for now in favour of interfacing SplashKit with
+  JavaScript.
### Using SplashKit as a Library in Javascript
-- There were three different approaches that could be taken - each one was tested along with
- pros/cons examined.
+- There were three different approaches that could be taken; each one was
+  tested and its pros and cons examined.
- First step was to test 'cwrap'ing.
- The Main function was renamed, and wrapped in `extern "C"`
- An additional 'rerender' function was added, to test multiple calls.
- `-sEXPORTED_RUNTIME_METHODS=ccall,cwrap` was added to the makefile
-- From here, the file was loaded in the browser, and the following executed on the brower's console:
+- From here, the file was loaded in the browser, and the following executed on
+  the browser's console:
```
start_main = Module.cwrap('start_main', 'number', [])
@@ -99,11 +109,13 @@ different_render = Module.cwrap('different_render', 'number', [])
different_render()
```
-- This method worked easily, however wrapping create_window immediately posed issues as it takes a
- C++ string (not a primitive), and also returns something other than a primitive. Methods involving
- manual allocation were investigated, but instead the two binding implementations Embind and WebIDL
- Binder seemed more promising.
-- Embind bindings for colour and a few functions were created. They look as follows:
+- This method worked easily; however, wrapping `create_window` immediately posed
+  issues, as it takes a C++ string (not a primitive) and also returns something
+  other than a primitive. Methods involving manual allocation were investigated,
+  but the two binding implementations, Embind and WebIDL Binder, seemed more
+  promising.
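The manual-allocation route mentioned above can be sketched as follows. The tiny mock `Module` is a stand-in for Emscripten's real heap helpers (`_malloc`, `_free`, `lengthBytesUTF8`, `stringToUTF8`); its fixed-offset "allocator" and the heap size are purely illustrative, and the `ccall` in the comment is an assumed usage, not a tested one.

```javascript
// Sketch of manually marshalling a JS string into WASM memory so it can be
// passed to a C function expecting a char*. The mock Module stands in for
// Emscripten's runtime so the idea is self-contained.
const heap = new Uint8Array(1024);
const Module = {
  _malloc: (n) => 16, // toy allocator: always hands out offset 16
  _free: (ptr) => {}, // no-op in this mock
  lengthBytesUTF8: (s) => new TextEncoder().encode(s).length,
  stringToUTF8: (s, ptr, maxBytes) => {
    const bytes = new TextEncoder().encode(s).slice(0, maxBytes - 1);
    heap.set(bytes, ptr);
    heap[ptr + bytes.length] = 0; // NUL terminator
  },
};

// Allocate, copy, call, free: the pattern a create_window wrapper would need.
function withCString(str, fn) {
  const len = Module.lengthBytesUTF8(str) + 1;
  const ptr = Module._malloc(len);
  Module.stringToUTF8(str, ptr, len);
  try {
    return fn(ptr); // e.g. Module.ccall('create_window', 'number', ['number'], [ptr])
  } finally {
    Module._free(ptr); // always release the temporary buffer
  }
}
```

This works for `char*` parameters, but it does not help with `std::string` arguments or non-primitive return values, which is why the binding generators looked more promising.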
+- Embind bindings for colour and a few functions were created. They look as
+ follows:
```
EMSCRIPTEN_BINDINGS(color) {
@@ -121,9 +133,10 @@ EMSCRIPTEN_BINDINGS(my_module) {
}
```
-- Unfortunately it seems Embind has issues with raw pointers, which SplashKit uses a lot of.
- Apparently it should only have issues with pointers to primitive types, but the same error was
- encountered even with structures (such as \_window_data\*).
+- Unfortunately, it seems Embind has issues with raw pointers, which SplashKit
+  uses heavily. Apparently it should only have issues with pointers to primitive
+  types, but the same error was encountered even with structures (such as
+  \_window_data\*).
- Finally WebIDL Binder was tried out.
- SplashKitWasm.idl was created and filled out with some simple prototypes.
- The C++ and Javascript glue was created by running
@@ -145,14 +158,15 @@ SK.fill_triangle(SK.color(1,0,0,1), 250, 300, 400, 150, 550, 300);
SK.refresh_screen()
```
-- It was here that testing was ended and this report was written up. The results can be seen in the
- final commit on the EmscriptenTest branch.
+- It was here that testing was ended and this report was written up. The results
+ can be seen in the final commit on the EmscriptenTest branch.
-This report took a bit longer to write up than it should have, as initially all the tests with
-Emscripten were performed using a personal compilation tool in order to make initial testing quick.
-That tool continued to be used to compile SplashKit-Core. Migrating to using SplashKit-Core's own
-compilation method took longer, but hopefully by doing so the results can be more easily reproduced
-and expanded on in the future.
+This report took a bit longer to write up than it should have, as initially all
+the tests with Emscripten were performed using a personal compilation tool in
+order to make initial testing quick. That tool continued to be used to compile
+SplashKit-Core. Migrating to using SplashKit-Core's own compilation method took
+longer, but hopefully by doing so the results can be more easily reproduced and
+expanded on in the future.
## What we found out
@@ -160,53 +174,63 @@ and expanded on in the future.
#### What worked
-During testing, the majority of SplashKit was compiled and linked successfully, and basic
-functionality (opening a window, drawing shapes) was confirmed to work. In the SDL test, SDL input
-was confirmed to work, making it likely it does in SplashKit as well.
+During testing, the majority of SplashKit was compiled and linked successfully,
+and basic functionality (opening a window, drawing shapes) was confirmed to
+work. In the SDL test, SDL input was confirmed to work, making it likely it does
+in SplashKit as well.
#### What wasn't tested
-Any functionality outside of that was not tested, including sound, animation, etc. Twitter,
-terminal, serial and JSON functionality was also not tested/replaced with stubs.
+Any functionality outside of that was not tested, including sound, animation,
+etc. Twitter, terminal, serial, and JSON functionality was also either not
+tested or replaced with stubs.
#### What didn't work
-Web functionality was replaced with stubs due to the usage of cURL which is not currently compilable
-under Emscripten. See (https://github.com/emscripten-core/emscripten/issues/3270)
+Web functionality was replaced with stubs due to the usage of cURL, which is not
+currently compilable under Emscripten. See
+https://github.com/emscripten-core/emscripten/issues/3270
### SplashKit can be compiled as a library and used in Javascript.
-Embind seemed promising but due to issues with raw pointers WebIDL Binder was investigated further
-and is plausibly the better alternative for this project. It has issues with functions in global
-scope unfortunately (https://github.com/emscripten-core/emscripten/issues/8390), requiring the
+Embind seemed promising, but due to its issues with raw pointers, WebIDL Binder
+was investigated further and is plausibly the better alternative for this
+project. Unfortunately, it has issues with functions in global scope
+(https://github.com/emscripten-core/emscripten/issues/8390), requiring the
majority of SplashKit's functions to be wrapped in a class.
### Emception was unable to be compiled.
-Until the bug here (https://github.com/jprendes/emception/issues/24) is fixed, it seems like it will
-be difficult to compile Emception without really digging into how it works and correcting the
-problem ourselves. Whether this is worth it or not is hard to say.
+Until the bug here (https://github.com/jprendes/emception/issues/24) is fixed,
+it seems like it will be difficult to compile Emception without really digging
+into how it works and correcting the problem ourselves. Whether this is worth it
+or not is hard to say.
## Open issues/risks
-As Emception was unable to be compiled, it is difficult to evaluate whether it would have been a
-good solution. There is risk that continuing to try and use it would just consume more time.
+As Emception was unable to be compiled, it is difficult to evaluate whether it
+would have been a good solution. There is a risk that continuing to try to use
+it would just consume more time.
-Much of SplashKit is also yet to be tested; perhaps there are yet unknown issues regarding sound and
-other interactivity. Testing of larger codebases using SplashKit should be conducted.
+Much of SplashKit is also yet to be tested; there may be as-yet-unknown issues
+regarding sound and other interactivity. Testing of larger codebases using
+SplashKit should be conducted.
## Recommendation
-One way forward would be to continue developing SplashKit Online as a Javascript based scripting
-environment; it has been confirmed SplashKit can be used as a library via Javascript, and this
-ensures no load on the server regarding compiling, and also no uncertainty regarding whether it will
-be possible to get Emception working.
-
-Another way forward is to use Emscripten as a back-end compiler to the web IDE, similar to the
-original SplashKit Online repository. This introduces more complexity on the server side, but would
-allow users to develop using C++ just as they would on their own computer.
-
-Finally, it might be worth continuing to investigate Emception and try to get it to compile. Several
-unknowns exist - how long will it take to understand and make compile, and if it runs whether it be
-able to compile well enough (there are concerns regarding speed). If it is successful however this
-would probably give the best result, but there are many unknowns.
+One way forward would be to continue developing SplashKit Online as a
+JavaScript-based scripting environment; it has been confirmed that SplashKit can
+be used as a library via JavaScript, and this approach places no compilation
+load on the server and carries no uncertainty about whether Emception can be
+made to work.
+
+Another way forward is to use Emscripten as a back-end compiler to the web IDE,
+similar to the original SplashKit Online repository. This introduces more
+complexity on the server side, but would allow users to develop using C++ just
+as they would on their own computer.
+
+Finally, it might be worth continuing to investigate Emception and trying to get
+it to compile. Several unknowns exist: how long it will take to understand and
+make it compile, and, if it runs, whether it will be able to compile code well
+enough (there are concerns regarding speed). If successful, however, this would
+probably give the best result.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-plan.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-plan.md
index fc5aa04a2..650bbed4d 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-plan.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/Research and Findings/splashkit-online-research-plan.md
@@ -6,25 +6,27 @@ title: SplashKit Online Research Spike Plan
## Context
-It would be useful if SplashKit could be used directly in-browser, in order to make it easier for
-people to get started without difficulties setting it up locally on their own machine. Last
-trimester the SplashKit Online project was started, however due to its difficulty was placed on
-hold.
-
-The purpose of this spike is to investigate technologies that may make running it online more
-viable, and explore which ways seem most promising. The main technology to be investigated here is
-WebAssembly (or WASM), which was mentioned in the readme for the SplashKit-Online repository
-(https://github.com/thoth-tech/SplashkitOnline). The technology itself doesn't appear to have been
-used in the project; instead it seems that project relied on both compiling _and_ executing the code
-on a back-end server. This could be considered an extension of that initial research.
+It would be useful if SplashKit could be used directly in-browser, to make it
+easier for people to get started without the difficulties of setting it up
+locally on their own machine. Last trimester the SplashKit Online project was
+started; however, due to its difficulty, it was placed on hold.
+
+The purpose of this spike is to investigate technologies that may make running
+it online more viable, and explore which ways seem most promising. The main
+technology to be investigated here is WebAssembly (or WASM), which was mentioned
+in the readme for the SplashKit-Online repository
+(https://github.com/thoth-tech/SplashkitOnline). The technology itself doesn't
+appear to have been used in the project; instead it seems that project relied on
+both compiling _and_ executing the code on a back-end server. This could be
+considered an extension of that initial research.
**Knowledge Gap:**
-- It is currently unknown how well certain technologies like WASM could be used to compile/run code
- using SplashKit in a browser.
+- It is currently unknown how well certain technologies like WASM could be used
+ to compile/run code using SplashKit in a browser.
- It is unknown if code can be compiled quick enough within the browser.
-- It is unknown if and how effectively SplashKit can be compiled as a library to be used within the
- browser.
+- It is unknown if and how effectively SplashKit can be compiled as a library to
+ be used within the browser.
**Skill Gap:**
@@ -34,17 +36,20 @@ on a back-end server. This could be considered an extension of that initial rese
- Ability to compile SplashKit to WASM.
-It is unsure whether projects like Emscripten or Emception are able to compile SplashKit and run the
-result in a browser interactively; this will need to be investigated. It is also uncertain whether
-it would be better to compile within the browser itself, or on a back-end server.
+It is unclear whether projects like Emscripten or Emception are able to compile
+SplashKit and run the result in a browser interactively; this will need to be
+investigated. It is also uncertain whether it would be better to compile within
+the browser itself, or on a back-end server.
## Goals/Deliverables
- Report on possible ways to continue developing SplashKit Online
- - Confirm whether code using SplashKit can be compiled with Emscripten (C/C++ to WASM compiler)
- and executed in a browser
- - Confirm whether SplashKit can be compiled and used as a library via Javascript in a browser
- - Confirm SplashKit code can be compiled in-browser using Emception (self hosted Emscripten)
+ - Confirm whether code using SplashKit can be compiled with Emscripten (C/C++
+ to WASM compiler) and executed in a browser
+ - Confirm whether SplashKit can be compiled and used as a library via
+ Javascript in a browser
+ - Confirm SplashKit code can be compiled in-browser using Emception (self
+ hosted Emscripten)
**Planned start date:** Week 1 T3 2023
@@ -53,16 +58,18 @@ it would be better to compile within the browser itself, or on a back-end server
## Planning notes
- Setup Emscripten
-- Confirm code using SDL can be compiled with Emscripten and executed in a browser
+- Confirm code using SDL can be compiled with Emscripten and executed in a
+ browser
- Setup simple SDL example
- Compile with Emscripten
- Run in browser and check result
-- Confirm code using SplashKit can be compiled with Emscripten and executed in a browser
+- Confirm code using SplashKit can be compiled with Emscripten and executed in a
+ browser
- Setup simple SplashKit example
- Compile with Emscripten
- Run in browser and check result
-- (Optional) Test whether SplashKit can be compiled and used as a library via Javascript in a
- browser
+- (Optional) Test whether SplashKit can be compiled and used as a library via
+ Javascript in a browser
- Investigate methods of binding
- Test binding methods
- Build and setup Emception
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/index.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/index.mdx
index c0c92e4eb..b2877a3a0 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/index.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Online/index.mdx
@@ -10,20 +10,22 @@ import { CardGrid, LinkCard } from "@astrojs/starlight/components";
## The SplashKit Online Team
The SplashKit Online Team manages all aspects of the
-[SplashKit Online IDE](https://thoth-tech.github.io/splashkit-online/) website, a web-based IDE
-designed to help beginner programmers quickly start building 2D games directly in the browser.
+[SplashKit Online IDE](https://thoth-tech.github.io/splashkit-online/) website,
+a web-based IDE designed to help beginner programmers quickly start building 2D
+games directly in the browser.
-It currently supports JavaScript (with experimental C++ functionality) and leverages WebAssembly
-(Wasm) to execute SplashKit code, but the goal is to expand this support to include all languages
-that SplashKit supports: C++, C#, Python, and Pascal.
+It currently supports JavaScript (with experimental C++ functionality) and
+leverages WebAssembly (Wasm) to execute SplashKit code, but the goal is to
+expand this support to include all languages that SplashKit supports: C++, C#,
+Python, and Pascal.
The team’s responsibilities include:
- Ensuring consistent styling and branding across the site
-- Improving language support by integrating additional language support. (Currently supporting
- Javascript and (experimental) C++.)
-- Optimising the site’s usability and accessibility for a smooth user experience by improving the
- user experience, performance and language-specific features.
+- Expanding language support by integrating additional languages. (Currently
+  supporting JavaScript and (experimental) C++.)
+- Optimising the site’s usability and accessibility for a smooth user experience
+ by improving the user experience, performance and language-specific features.
## Onboarding Information
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/01-Tutorial-Proposal-Template.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/01-Tutorial-Proposal-Template.md
index aa9f1a9f5..cfbfa4d6f 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/01-Tutorial-Proposal-Template.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/01-Tutorial-Proposal-Template.md
@@ -1,22 +1,23 @@
---
title: Tutorial Proposal Template
-description: Use this template to create a proposal for a new SplashKit tutorial.
+description:
+ Use this template to create a proposal for a new SplashKit tutorial.
sidebar:
label: Tutorial Proposal Template
---
## Introduction
-Provide a brief introduction to the tutorial, explaining what the tutorial will cover and what the
-reader will learn from it.
+Provide a brief introduction to the tutorial, explaining what the tutorial will
+cover and what the reader will learn from it.
## Tutorial Details
### Tutorial Structure
-Explain the basic structure you plan to use for the tutorial. (Whole code first then explain
-snippets? Or introduce code "as-you-go" style? Background information at the top or throughout the
-tutorial? etc.)
+Explain the basic structure you plan to use for the tutorial. (Whole code first
+then explain snippets? Or introduce code "as-you-go" style? Background
+information at the top or throughout the tutorial? etc.)
### Level of Difficulty
@@ -34,6 +35,6 @@ List the main SplashKit functions that will be included in the tutorial
## Conclusion
-Summarise the importance of the tutorial and how it will benefit the readers. Reiterate the main
-points covered in the tutorial and explain how readers can apply this new knowledge in their own
-projects.
+Summarise the importance of the tutorial and how it will benefit the readers.
+Reiterate the main points covered in the tutorial and explain how readers can
+apply this new knowledge in their own projects.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/02-Tutorial-Style-Guide.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/02-Tutorial-Style-Guide.mdx
index fa977ccd5..c72b42d30 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/02-Tutorial-Style-Guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/02-Tutorial-Style-Guide.mdx
@@ -1,6 +1,7 @@
---
title: Tutorial Style Guide
-description: Use this guide to ensure your SplashKit tutorial is formatted correctly.
+description:
+ Use this guide to ensure your SplashKit tutorial is formatted correctly.
sidebar:
label: Tutorial Style Guide
---
@@ -31,8 +32,8 @@ Always keep a blank line between headings and other text.
### Subheadings
-For subheadings that you want to see in the RHS sidebar/panel on splashkit.io, use `### Subheading`.
-Subheadings lower than this will not be shown.
+For subheadings that you want to see in the RHS sidebar/panel on splashkit.io,
+use `### Subheading`. Subheadings lower than this will not be shown.
Do not use bolded lines for headings/subheadings.
@@ -44,8 +45,8 @@ Do not use any "raw" links. Links must always use the following format:
[Text to show for link](URL link)
```
-Make sure to link your headings within the same file if these are mentioned, using the following
-format:
+Make sure to link your headings within the same file if these are mentioned,
+using the following format:
```mdx
[Heading text](#link-to-heading)
@@ -61,12 +62,14 @@ Use the following format for all images:

```
-For the **Alt text** part above: Briefly explain what the image is showing. This is important for
-accessibility.
+For the **Alt text** part above: Briefly explain what the image is showing. This
+is important for accessibility.
-For the **link to image** part above: If you are linking an image resource that you have downloaded,
-this will need to be put into an **images** folder in the same place as the tutorial.
-For example, with the **skbox.png** image in the images folder here, you would use:
+For the **link to image** part above: If you are linking an image resource that
+you have downloaded, this will need to be put into an **images** folder in the
+same place as the tutorial.
+For example, with the **skbox.png** image in the images folder here, you would
+use:
```mdx

@@ -82,8 +85,8 @@ Always keep a blank line between lists and other text.
## Code Blocks
-Use fenced code blocks for any code snippets or terminal commands, to make it easier for the reader
-to copy.
+Use fenced code blocks for any code snippets or terminal commands, to make it
+easier for the reader to copy.
Always include a language with the fenced code blocks.
@@ -167,11 +170,11 @@ delay(5000)
close_all_windows()
```
-For any blocks that are not code, you can use `plaintext` for the language. For terminal commands,
-use `shell` for the language.
+For any blocks that are not code, you can use `plaintext` for the language. For
+terminal commands, use `shell` for the language.
-If your guide is only using 1 language (and not using at least both C# and C++), make sure to
-include the language used in the title.
+If your guide is only using 1 language (and not using at least both C# and C++),
+make sure to include the language used in the title.
### Multiple Code Languages
@@ -386,12 +389,13 @@ write_line(name + "'s quest is: " + quest)
## Callouts (Asides)
-Use callouts (also known as [Asides](https://starlight.astro.build/guides/components/#asides)) to
-highlight tips or important notes.
+Use callouts (also known as
+[Asides](https://starlight.astro.build/guides/components/#asides)) to highlight
+tips or important notes.
:::tip[Make the page interesting]
-These can help to direct the reader to any extra info, and help to add more colour to your tutorial
-guide.
+These can help to direct the reader to any extra info, and help to add more
+colour to your tutorial guide.
:::
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-adding-oop.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-adding-oop.mdx
index a78a2b47a..c08e5f523 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-adding-oop.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-adding-oop.mdx
@@ -1,6 +1,7 @@
---
title: Guide to adding OOP to SplashKit tutorials
-description: Learn how to add object-oriented programming (OOP) to the SplashKit tutorials.
+description:
+ Learn how to add object-oriented programming (OOP) to the SplashKit tutorials.
sidebar:
hidden: true
label: Adding OOP to Guides
@@ -10,14 +11,15 @@ import { Tabs, TabItem } from "@astrojs/starlight/components";
## Adding OOP and Top Level C# to Splashkit Tutorials
-One of the current goals of the SplashKit team is to enhance the tutorials by including both
-top-level statement and object-oriented (OOP) versions of C# programs. This guide outlines how to
-effectively integrate OOP into the existing tutorials, providing both options for users to choose
-from.
+One of the current goals of the SplashKit team is to enhance the tutorials by
+including both top-level statement and object-oriented (OOP) versions of C#
+programs. This guide outlines how to effectively integrate OOP into the existing
+tutorials, providing both options for users to choose from.
### The Full Code Block Structure
-The full code block structure for C++, C# in top level and OOP, and Python is as follows:
+The full code block structure for C++, C# in top level and OOP, and Python is as
+follows:
````md
@@ -64,16 +66,17 @@ Add Python code here
````
-This is the new standard structure for all Splashkit tutorials. The C# code block has been replaced
-with a tabs component that contains two tabs, one for top-level statements and one for
-object-oriented programming. The C# code block has been replaced with a tabs component that contains
-two tabs, one for top-level statements and one for object-oriented programming.
+This is the new standard structure for all Splashkit tutorials. The C# code
+block has been replaced with a tabs component that contains two tabs, one for
+top-level statements and one for object-oriented programming.
### Adding OOP to the Splashkit tutorials
-If you are adding OOP to the Splashkit tutorials, you will need to replace the C# section with the
-following code block in order to have both top-level statements and object-oriented programming
-options:
+If you are adding OOP to the Splashkit tutorials, you will need to replace the
+C# section with the following code block in order to have both top-level
+statements and object-oriented programming options:
````md
@@ -100,9 +103,9 @@ Add OOP version of C# code here
## Example
-Once done, the view of the code blocks will remain the same on the Splashkit site. However, once
-clicking on the C# tab, the user will be able to see both the top-level statements and
-object-oriented programming versions of the code.
+Once done, the view of the code blocks will remain the same on the Splashkit
+site. However, once the user clicks on the C# tab, they will be able to see
+both the top-level statements and object-oriented programming versions of the
+code.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-oop-styling.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-oop-styling.mdx
index 78761569b..091fa414d 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-oop-styling.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/04-oop-styling.mdx
@@ -1,6 +1,7 @@
---
title: Converting Between Top-Level Statements and OOP Styling in C#
-description: Learn how to add object-oriented programming (OOP) to the SplashKit tutorials.
+description:
+ Learn how to add object-oriented programming (OOP) to the SplashKit tutorials.
sidebar:
label: C# OOP Styling
hidden: true
@@ -9,9 +10,9 @@ sidebar:
import { Tabs, TabItem } from "@astrojs/starlight/components";
import { Steps } from "@astrojs/starlight/components";
-This guide explains how to convert between top-level statements and object-oriented programming
-(OOP) styles in C# for SplashKit tutorials. This guide is good to refer to when you need to convert
-code between the two styles.
+This guide explains how to convert between top-level statements and
+object-oriented programming (OOP) styles in C# for SplashKit tutorials. This
+guide is good to refer to when you need to convert code between the two styles.
## C# Tabs
@@ -19,16 +20,16 @@ code between the two styles.
**Top-Level Statements**:
-- Top-level statements allow you to write C# code without explicitly defining a class or `Main`
- method.
-- Uses the directive: `using static SplashKitSDK.SplashKit;`, so SplashKit functions are called
- directly, such as `WriteLine("Hello!");`.
+- Top-level statements allow you to write C# code without explicitly defining a
+ class or `Main` method.
+- Uses the directive: `using static SplashKitSDK.SplashKit;`, so SplashKit
+ functions are called directly, such as `WriteLine("Hello!");`.
**Object-Oriented Programming (OOP)**:
- OOP-style C# code requires defining a `Main` method inside a class.
-- Uses `using SplashKitSDK;`, meaning all SplashKit commands are prefixed with `SplashKit.` (e.g.,
- `SplashKit.WriteLine("Hello!");`).
+- Uses `using SplashKitSDK;`, meaning all SplashKit commands are prefixed with
+ `SplashKit.` (e.g., `SplashKit.WriteLine("Hello!");`).
### Converting from Top-Level Statements to OOP
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/05-basic-vectors-proposal.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/05-basic-vectors-proposal.md
index c1d981ed0..cc1fdd207 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/05-basic-vectors-proposal.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/05-basic-vectors-proposal.md
@@ -1,8 +1,8 @@
---
title: Tutorial Proposal Guide
description:
- Structure of a tutorial proposal on basic vector mathematics and its applications in game
- development.
+ Structure of a tutorial proposal on basic vector mathematics and its
+ applications in game development.
sidebar:
hidden: true
label: Tutorial Proposal Guide
@@ -10,29 +10,32 @@ sidebar:
## Tutorial Template 1: Basic Vectors
-This tutorial will cover the fundamentals of vector mathematics and some applications (such as in
-game development). The intended audience will be for beginners to intermediate learners, and will
-guide them through understanding and implementing vector operations, including angle and magnitude
-calculations, vector addition and subtraction, and the dot product. It will also cover other
-techniques like vector normals and basic reflection, all within the context of creating dynamic game
-mechanics. By the end, it will lead to a greater understanding of using vectors to enhance program
-functionality and interactivity.
+This tutorial will cover the fundamentals of vector mathematics and some
+applications (such as in game development). The intended audience is beginner
+to intermediate learners, and the tutorial will guide them through
+understanding and implementing vector operations, including angle and magnitude
+calculations, vector addition and subtraction, and the dot product. It will
+also cover other techniques like vector normals and basic reflection, all
+within the context of creating dynamic game mechanics. By the end, it will
+lead to a greater understanding of using vectors to enhance program
+functionality and interactivity.
### Tutorial Details
#### Tutorial Structure
-This tutorial will follow an "as-you-go" approach, introducing concepts and code snippets
-progressively. Each section will focus on a specific vector operation, providing both theoretical
-background and practical implementation. Visual aids and code examples will be used throughout to
-help illustrate the concepts. It will also contain the code for the learner to run themselves and
-adjust the values as they please to see the effects on the program.
+This tutorial will follow an "as-you-go" approach, introducing concepts and code
+snippets progressively. Each section will focus on a specific vector operation,
+providing both theoretical background and practical implementation. Visual aids
+and code examples will be used throughout to help illustrate the concepts. It
+will also contain the code for the learner to run themselves and adjust the
+values as they please to see the effects on the program.
#### Level of Difficulty
-This tutorial is geared towards beginners and intermediate learners with basic programming
-knowledge. It's ideal for those looking to expand their understanding of game mechanics through
-vector mathematics.
+This tutorial is geared towards beginners and intermediate learners with basic
+programming knowledge. It's ideal for those looking to expand their
+understanding of game mechanics through vector mathematics.
#### Functions Covered
@@ -49,36 +52,39 @@ The tutorial will cover the following main SplashKit functions:
### Conclusion
-This tutorial provides a comprehensive introduction to vectors and their applications in game
-development. By understanding and utilising vector operations, readers can significantly enhance the
-realism and responsiveness of their games. Whether creating simple games or increasing the
-interactivity of their existing games, the skills gained from this tutorial will add complexity to
-their programs.
+This tutorial provides a comprehensive introduction to vectors and their
+applications in game development. By understanding and utilising vector
+operations, readers can significantly enhance the realism and responsiveness of
+their games. Whether creating simple games or increasing the interactivity of
+their existing games, the skills gained from this tutorial will add complexity
+to their programs.
## Tutorial Template 2: Camera Controls
### Introduction
-This tutorial will cover how to use the various camera functions in SplashKit to implement dynamic
-camera movements in graphical applications. By following this tutorial, readers will learn to create
-zoom effects, responsive camera controls, and movements enhancing the interactivity and immersion of
-their games or other projects.
+This tutorial will cover how to use the various camera functions in SplashKit to
+implement dynamic camera movements in graphical applications. By following this
+tutorial, readers will learn to create zoom effects, responsive camera controls,
+and movements enhancing the interactivity and immersion of their games or other
+projects.
### Tutorial Details
#### Tutorial Structure
-This tutorial will follow an "as-you-go" style. Each section will introduce a specific concept and
-demonstrate its practical application with step-by-step code examples. The tutorial will start with
-a basic overview of the camera functions, then guide readers through progressively more complex
-examples, allowing them to apply what they’ve learned immediately. Visual aids will be used to
-illustrate camera movements and effects in real-time.
+This tutorial will follow an "as-you-go" style. Each section will introduce a
+specific concept and demonstrate its practical application with step-by-step
+code examples. The tutorial will start with a basic overview of the camera
+functions, then guide readers through progressively more complex examples,
+allowing them to apply what they’ve learned immediately. Visual aids will be
+used to illustrate camera movements and effects in real-time.
#### Level of Difficulty
-This tutorial is targeted at intermediate learners who have experience in programming with the
-SplashKit library. It is ideal for developers who want to add dynamic camera control to their
-graphical applications or games.
+This tutorial is targeted at intermediate learners who have experience in
+programming with the SplashKit library. It is ideal for developers who want to
+add dynamic camera control to their graphical applications or games.
#### Functions Covered
@@ -93,7 +99,8 @@ The tutorial will cover the following main SplashKit functions:
### Conclusion
-This tutorial will help readers understand how to manipulate the camera in their SplashKit
-applications. By mastering these camera functions, developers can create more engaging and
-interactive experiences, such as zoom effects and camera movements. This knowledge will be valuable
-for anyone looking to improve the visual dynamics and responsiveness of their game projects.
+This tutorial will help readers understand how to manipulate the camera in their
+SplashKit applications. By mastering these camera functions, developers can
+create more engaging and interactive experiences, such as zoom effects and
+camera movements. This knowledge will be valuable for anyone looking to improve
+the visual dynamics and responsiveness of their game projects.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/06-SplashKitTutorials.md b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/06-SplashKitTutorials.md
index e6df651e4..d666181d4 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/06-SplashKitTutorials.md
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/06-SplashKitTutorials.md
@@ -1,16 +1,18 @@
---
title: Tutorials List
-description: A compilation of SplashKit tutorials with recommendations for improvement.
+description:
+ A compilation of SplashKit tutorials with recommendations for improvement.
sidebar:
hidden: true
---
-On this page is a compilation of SplashKit tutorials and tutorial proposals. Areas of potential
-improvement have been marked.
+On this page is a compilation of SplashKit tutorials and tutorial proposals.
+Areas of potential improvement have been marked.
-The tutorials have been layed out in categories that seem reasonable for further development. There
-is a need for both tutorials that focus on specific areas (such as sprites, or audio), along with
-tutorials that bring these concepts together cohesively (like the Metroidvania series).
+The tutorials have been laid out in categories that seem reasonable for further
+development. There is a need for both tutorials that focus on specific areas
+(such as sprites, or audio), along with tutorials that bring these concepts
+together cohesively (like the Metroidvania series).
## Current Tutorials
@@ -34,14 +36,15 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-05-30-get-started-drawing.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Starter/get-started-drawing.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/starter/get-started-drawing/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/starter/get-started-drawing/)
#### _Understanding Double Buffering_
- Overview: An explanation of double buffering.
- Status: Needs Improvement/Checking
- - Explaining that without double buffering, the in-between states while drawing could end up
- visible to the user would be good.
+  - It would be good to explain that without double buffering, the in-between
+    states while drawing could end up visible to the user.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-05-30-basic-drawing.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Starter/double-buffering.mdx)
@@ -72,7 +75,8 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
- Repo Links:
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Tutorial%20Markdowns/Getting%20Started%20With%20SplashKit%20-%20C%23-C%2B%2B/Getting%20Started%20With%20Splashkit%20-%20C%23-C%2B%2B.md)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Starter/Getting%20Started%20With%20Splashkit.md)
-- Website Links: [_Live_](https://splashkit.io/guides/starter/getting-started-with-splashkit/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/starter/getting-started-with-splashkit/)
---
@@ -114,32 +118,35 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
#### _Sprite Layering tutorial C++_
-- Overview: Explanation of what sprite layering is with code and video of result.
+- Overview: Explanation of what sprite layering is with code and video of
+ result.
- Status: Completed
- Repo Links:
[_SplashKit-Tutorial_]()
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Sprites/Sprite%20Layering%20Tutorial.md)
-- Website Links: [_Live_](https://splashkit.io/guides/sprites/sprite-layering-tutorial/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/sprites/sprite-layering-tutorial/)
#### _Getting Started With Sprites in Splashkit - C#_
- Overview: Explanation of what sprites are in SplashKit.
- Status: Needs Improvement/Checking
- - Is quite good! One thing that might make it better is to just improve the explanation of what a
- sprite is, since a sprite isn't really a bitmap, it's closer to an instantiation of a bitmap (as
- mentioned later on in it).
+  - Is quite good! One thing that might make it better is to improve the
+    explanation of what a sprite is, since a sprite isn't really a bitmap;
+    it's closer to an instantiation of a bitmap (as mentioned later on in it).
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/Tutorial%20Markdowns/Getting%20Started%20With%20Sprites%20in%20Splashkit%20Tutorial%20-%20C%23/Getting%20Started%20With%20Sprites%20in%20Splashkit%20Tutorial%20-%20CSharp.md)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Sprites/Getting%20Started%20With%20Sprites%20csharp.md)
- Proposal Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Getting%20Started%20With%20Sprites%20in%20Splashkit%20Outline%20-%20C%23.md)
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Getting%20Started%20With%20Sprites%20in%20Splashkit%20Outline%20-%20C%23.md)
-- Website Links: [_Live_](https://splashkit.io/guides/sprites/getting-started-with-sprites-csharp/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/sprites/getting-started-with-sprites-csharp/)
#### _Getting Started With Sprite layering in Splashkit - C#_
-- Overview: Explanation of what sprite layering is with code and video of result. Slightly more
- technical than 'Sprite Layering tutorial C++'.
+- Overview: Explanation of what sprite layering is with code and video of
+ result. Slightly more technical than 'Sprite Layering tutorial C++'.
- Status: Completed
- Repo Links:
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Tutorial%20Markdowns/Getting%20Started%20With%20Sprite%20Layering%20in%20Splashkit%20Tutorial%20-%20C%23/Sprite%20layering%20in%20Splashkit%20Tutorial%20-%20C%23.md)
@@ -176,12 +183,13 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
- Overview: A guide on playing sound effects and music.
- Status: Needs improvement
- - A bit bare-bones. See the proposal 'Introduction to Splashkit Audio and Music Functions' for
- perhaps a good replacement.
+ - A bit bare-bones. See the proposal 'Introduction to Splashkit Audio and
+ Music Functions' for perhaps a good replacement.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-06-10-about-audio.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Audio/GettingStartedAudio.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/audio/gettingstartedaudio/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/audio/gettingstartedaudio/)
---
@@ -194,12 +202,13 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-05-29-animation.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Animations/Using%20Animation.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/animations/using-animation/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/animations/using-animation/)
#### _Sprite Animation_
-- Overview: Builds upon the "Using Animations", cover similar functionality (animations, movement,
- etc)
+- Overview: Builds upon "Using Animations", covering similar functionality
+  (animations, movement, etc.)
- Status: Completed
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/Sprite%20Animation%20Tutorial/Sprite%20Animation%20Tutorial.md)
@@ -210,9 +219,9 @@ tutorials that bring these concepts together cohesively (like the Metroidvania s
:::note[Thoughts]
-While these tutorials are quite good, they feel very disconnected from the rest of SplashKit.
-Wouldn't creating a (very simple) multiplayer game have been a better subject? I don't think anyone
-is going to make a website using SplashKit.
+While these tutorials are quite good, they feel very disconnected from the rest
+of SplashKit. Wouldn't creating a (very simple) multiplayer game have been a
+better subject? I don't think anyone is going to make a website using SplashKit.
:::
@@ -223,16 +232,19 @@ is going to make a website using SplashKit.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-07-14-getting-started-with-servers.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Networking/getting-started-with-servers.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/networking/getting-started-with-servers/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/networking/getting-started-with-servers/)
#### _Routing With Servers_
-- Overview: A continuation of the previous tutorial, serving different pages to different routes.
+- Overview: A continuation of the previous tutorial, serving different pages to
+ different routes.
- Status: Completed
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-08-10-routing-with-servers.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Networking/routing-with-servers.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/networking/routing-with-servers/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/networking/routing-with-servers/)
#### _How to make a RESTful API call using Splashkit_
@@ -241,7 +253,8 @@ is going to make a website using SplashKit.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2018-10-03-restful-api-call.html.md.erb)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Networking/index.mdx)
-- Website Links: [_Live_](https://splashkit.io/guides/networking/restful-api-call/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/networking/restful-api-call/)
---
@@ -251,9 +264,10 @@ is going to make a website using SplashKit.
- Overview: An introduction to interacting with databases via SplashKit.
- Status: Needs Improvement/Checking
- - The tutorial is well written and engaging. Only thing missing perhaps is a better explanation of
- what a database is, since SplashKit is targetted at beginnners who may not know what they are. A
- visual example with tables might be good?
+  - The tutorial is well written and engaging. The only thing missing,
+    perhaps, is a better explanation of what a database is, since SplashKit is
+    targeted at beginners who may not know what one is. A visual example with
+    tables might be good?
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2017-10-03-using-databases.html.md)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Database/using-databases.mdx)
@@ -261,7 +275,8 @@ is going to make a website using SplashKit.
#### _Getting Started With SplashKit Database_
-- Overview: A much more in-depth tutorial on the database functions (closer to an API reference).
+- Overview: A much more in-depth tutorial on the database functions (closer to
+ an API reference).
- Status: Completed
- Repo Links:
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Tutorial%20Markdowns/Getting%20Started%20With%20SplashKit%20Database.md)
@@ -276,8 +291,9 @@ is going to make a website using SplashKit.
- Overview: A short explanation of JSON with a code example.
- Status: Needs improvement
- - Doesn't provide much explanation of JSON nor why one would want to use it. The code example
- clearly demonstrates writing, but is missing reading that data back in.
+  - Doesn't provide much explanation of JSON or why one would want to use it.
+    The code example clearly demonstrates writing, but omits reading that data
+    back in.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2017-10-03-using-json.html.md)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/JSON/using-json.mdx)
@@ -294,7 +310,8 @@ is going to make a website using SplashKit.
- Repo Links:
[_splashkit.io_](https://github.com/splashkit/splashkit.io/tree/develop/source/articles/guides/2017-10-03-useful-utilities.html.md)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Utilities/useful-utilities.md)
-- Website Links: [_Live_](https://splashkit.io/guides/utilities/useful-utilities/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/utilities/useful-utilities/)
---
@@ -302,21 +319,23 @@ is going to make a website using SplashKit.
:::note[Thoughts]
-While these tutorials are quite decent, they suffer from what seems to be a big problem. There isn't
-actually a completed project that the tutorials are leading up to, and since some of the parts are
-written by different people, there is a lack of continuity. Something introduced by one tutorial (as
-an example: creating a floor using sprites), is then forgotten about in a future tutorial (which
-then opts to quickly add a ground by drawing rectangles).
+While these tutorials are quite decent, they suffer from what seems to be a big
+problem. There isn't actually a completed project that the tutorials are leading
+up to, and since some of the parts are written by different people, there is a
+lack of continuity. Something introduced by one tutorial (as an example:
+creating a floor using sprites), is then forgotten about in a future tutorial
+(which then opts to quickly add a ground by drawing rectangles).
-It would be good if there was a 'source-of-truth' codebase for the finished game, that can then be
-used as a base to keep the tutorials cohesive even when written by different people.
+It would be good if there was a 'source-of-truth' codebase for the finished
+game, that can then be used as a base to keep the tutorials cohesive even when
+written by different people.
:::
#### _Creating a 2D Metroidvania Game (1, 2, 5, 6, 12)_
-- Overview: A series on producing a Metroidvania game, have only skim read. Seems quite thorough and
- well written.
+- Overview: A series on producing a Metroidvania game (only skimmed so far).
+  Seems quite thorough and well written.
- Status: Completed
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/Creating%20a%202D%20Metroidvania%20Game%20Using%20Splashkit/)
@@ -325,10 +344,12 @@ used as a base to keep the tutorials cohesive even when written by different peo
- Overview: See above.
-**Note:** There are **two** 'Part 2's, with somewhat overlapping content. Someone will need to
-choose which one to use in the final series, or merge bits of them together. While this tutorial
-suggests it comes after the _other_ part 2, I would recommend putting this one first, since this one
-covers project creation with `skm new`, and doesn't go into as much detail about drawing graphics.
+**Note:** There are **two** 'Part 2's, with somewhat overlapping content.
+Someone will need to choose which one to use in the final series, or merge bits
+of them together. While this tutorial suggests it comes after the _other_ part
+2, I would recommend putting this one first, since this one covers project
+creation with `skm new`, and doesn't go into as much detail about drawing
+graphics.
- Status: Completed
- Repo Links:
@@ -388,16 +409,17 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Tutorials on the syntax of CMake.
- Status: Needs Improvement/Checking
- - While these are well written, they don't have a whole lot to do with SplashKit aside from the
- final one (see below). Should they use and reference the SplashKit cmake file as an example each
- time?
+ - While these are well written, they don't have a whole lot to do with
+ SplashKit aside from the final one (see below). Should they use and
+ reference the SplashKit cmake file as an example each time?
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/Cmake%20Tutorial/)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Others/Cmake/)
- Proposal Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Building%20the%20SplashKit%20Core%20Library%20with%20CMake.md)
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Building%20the%20SplashKit%20Core%20Library%20with%20CMake.md)
-- Website Links: [_Live_](https://splashkit.io/guides/others/cmake/1-get-started/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/others/cmake/1-get-started/)
#### _CMake #9. Building the SplashKit Core Library with CMake_
@@ -409,17 +431,19 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Proposal Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Building%20the%20SplashKit%20Core%20Library%20with%20CMake.md)
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Building%20the%20SplashKit%20Core%20Library%20with%20CMake.md)
-- Website Links: [_Live_](https://splashkit.io/guides/others/cmake/9-cmake-with-splashkit/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/others/cmake/9-cmake-with-splashkit/)
#### _Publishing in SplashKit - C# / C++_
-- Overview: Short tutorial explaining how to publish a game made in SplashKit, with regards to
- assets and such.
+- Overview: Short tutorial explaining how to publish a game made in SplashKit,
+ with regards to assets and such.
- Status: Completed
- Repo Links:
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Tutorial%20Markdowns/Publishing%20with%20SplashKit%20-%20C%23)
[_splashkit.io-starlight_](https://github.com/thoth-tech/splashkit.io-starlight/blob/master/src/content/docs/guides/Others/Publishing%20with%20SplashKit%20Csharp.md)
-- Website Links: [_Live_](https://splashkit.io/guides/others/publishing-with-splashkit-csharp/)
+- Website Links:
+ [_Live_](https://splashkit.io/guides/others/publishing-with-splashkit-csharp/)
---
@@ -438,12 +462,14 @@ covers project creation with `skm new`, and doesn't go into as much detail about
#### _Game Concept Ideas_
-- Overview: An article on resources to help with coming up with game concepts and resources.
+- Overview: An article on resources to help with coming up with game concepts
+ and resources.
- Status: Needs improvement
- - While it's solid as an article, the way it presents itself as the first tutorial in a series
- (which as far as I can tell does not exist) makes its actual goal confusing. Unless further
- tutorials in this series will be made, I suggest re-writing parts of it to make it self
- contained. Is it actually a proposal? If so, why is it so long?
+  - While it's solid as an article, the way it presents itself as the first
+    tutorial in a series (which as far as I can tell does not exist) makes its
+    actual goal confusing. Unless further tutorials in this series will be made,
+    I suggest rewriting parts of it to make it self-contained. Is it actually a
+    proposal? If so, why is it so long?
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/Game%20Concept.md)
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Game%20Concept.md)
@@ -452,15 +478,16 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: An article describing some principles of UX/UI
- Status: Needs improvement
- - Same thoughts as above. It's solid as an introductory article, but it isn't a tutorial. Again,
- perhaps it's a proposal?
+ - Same thoughts as above. It's solid as an introductory article, but it isn't
+ a tutorial. Again, perhaps it's a proposal?
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorials/2d%20Racer%20Design%20tutorial.md)
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/2d%20Racer%20Design%20tutorial.md)
#### _Controls_
-- Overview: A list of the controls needed to be compatible with the arcade machine. Not a tutorial.
+- Overview: A list of the controls needed to be compatible with the arcade
+ machine. Not a tutorial.
- Status: Needs improvement
- The file itself is fine, but it shouldn't be in tutorials.
- Repo Links:
@@ -483,8 +510,8 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Proposal for introduction to audio and music functions.
- Status: Incomplete
- - Seems to overlap with 'Get started with SplashKit Audio' but much more in-depth. Possibly a good
- replacement.
+ - Seems to overlap with 'Get started with SplashKit Audio' but much more
+ in-depth. Possibly a good replacement.
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Audio%20Series/Basic%20Audio%20Manipulation%20in%20Splashkit.md)
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Basic%20Audio%20Manipulation%20in%20Splashkit.md)
@@ -493,8 +520,8 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Covers sound and music resource management.
- Status: Incomplete
- - Seems to overlap with 'Get started with SplashKit Audio' but much more in-depth. Possibly a good
- replacement.
+ - Seems to overlap with 'Get started with SplashKit Audio' but much more
+ in-depth. Possibly a good replacement.
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Audio%20Series/Managing%20Audio%20Resources%20in%20Splashkit.md)
@@ -502,8 +529,8 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Covers specifically sound effects, playing them, etc.
- Status: Incomplete
- - Seems to overlap with 'Get started with SplashKit Audio' but much more in-depth. Possibly a good
- replacement.
+ - Seems to overlap with 'Get started with SplashKit Audio' but much more
+ in-depth. Possibly a good replacement.
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Audio%20Series/Working%20with%20Sound%20Effects%20in%20Splashkit.md)
@@ -519,8 +546,9 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Covers SplashKit project setup + other commands.
- Status: Probably incomplete.
- - Overlaps with 'Getting Started: C++, C#, Python, and Pascal - Windows'. See above. Also mostly
- already covered by 'Getting Started With SplashKit - Windows C#/C++'
+ - Overlaps with 'Getting Started: C++, C#, Python, and Pascal - Windows'. See
+ above. Also mostly already covered by 'Getting Started With SplashKit -
+ Windows C#/C++'
- Repo Links:
[_SplashKit-Tutorial_]()
[_documentation_]()
@@ -550,25 +578,27 @@ covers project creation with `skm new`, and doesn't go into as much detail about
- Overview: Covers installing MSYS2, SplashKit and VSCode + project setup.
- Status: Archived - too much overlap.
- - Overlaps with 'Understanding SplashKit Manager (SKM) Shell Commands'. See below. Also mostly
- already covered by 'Getting Started With SplashKit - Windows C#/C++'
+ - Overlaps with 'Understanding SplashKit Manager (SKM) Shell Commands'. See
+ below. Also mostly already covered by 'Getting Started With SplashKit -
+ Windows C#/C++'
- Repo Links:
[_SplashKit-Tutorial_](https://github.com/thoth-tech/SplashKit-Tutorial/blob/main/Tutorial%20Proposals/Getting%20Started%20in%20Splashkit%20Outline.md)
[_documentation_](https://github.com/thoth-tech/documentation/blob/main/docs/Splashkit/Applications/Tutorials%20and%20Research/Tutorial%20Proposals/Getting%20Started%20in%20Splashkit%20Outline.md)
## [For Reference] Current SplashKit Tutorial/Proposal Directories
-All the directories listed below contain either tutorials or proposals (or both). In its current
-state, tutorials and proposals seem to be mixed together and files from the same series are
-scattered across folders. There is also a lot of duplication.
+All the directories listed below contain either tutorials or proposals (or
+both). In its current state, tutorials and proposals seem to be mixed together
+and files from the same series are scattered across folders. There is also a lot
+of duplication.
The current plan for future tutorials seems to be to store all tutorials under
-[splashkit.io-starlight](https://github.com/thoth-tech/splashkit.io-starlight), and all tutorial
-proposals under
-[ThothTech-Documentation-Website](https://github.com/thoth-tech/ThothTech-Documentation-Website). We
-should attempt to migrate all tutorials to fit under that structure at some point. There are also
-some completed tutorials that are not currently live on any site - it should be investigated if
-there is a reason for this.
+[splashkit.io-starlight](https://github.com/thoth-tech/splashkit.io-starlight),
+and all tutorial proposals under
+[ThothTech-Documentation-Website](https://github.com/thoth-tech/ThothTech-Documentation-Website).
+We should attempt to migrate all tutorials to fit under that structure at some
+point. There are also some completed tutorials that are not currently live on
+any site; it should be investigated whether there is a reason for this.
### Tutorials
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/07-Tutorial-JSON.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/07-Tutorial-JSON.mdx
index f492587d3..7ef6d5609 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/07-Tutorial-JSON.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Tutorials Documentation/07-Tutorial-JSON.mdx
@@ -7,9 +7,10 @@ sidebar:
import { FileTree } from "@astrojs/starlight/components";
-For the [splashkit.io](https://splashkit.io/) website, JSON files are now used to list all the
-functions used in the guides pages, to link them in to the API documentation pages for SplashKit
-users to see if a function has been demonstrated in any of the guides.
+For the [splashkit.io](https://splashkit.io/) website, JSON files are now used
+to list all the functions used in the guides pages, linking them to the API
+documentation pages so SplashKit users can see whether a function has been
+demonstrated in any of the guides.
Here is information about how this is set up:
@@ -45,14 +46,16 @@ Here is information about how this is set up:
-When creating a folder for a new category, it can be placed within the guides folder. The JSON file
-for a certain category should be the folder's name in lowercase.
+When creating a folder for a new category, it can be placed within the guides
+folder. The JSON file for a certain category should be the folder's name in
+lowercase.
### Content
-Each guides' entry need to contain a name, an array of the functions used, and the url to the guide
-page. It is important to use the function's **unique_global_name** to ensure the correct function is
-linked on the documentation site. Each new addition can be added consecutively using the curly
+Each guide's entry needs to contain a name, an array of the functions used, and
+the URL of the guide page. It is important to use the function's
+**unique_global_name** to ensure the correct function is linked on the
+documentation site. Each new addition can be added consecutively using the curly
brackets.
```json
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/01-overview.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/01-overview.mdx
index 6be550e90..31ba840c0 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/01-overview.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/01-overview.mdx
@@ -1,6 +1,8 @@
---
title: Usage Example Overview
-description: Learn how to create, submit, and review usage examples for the Splashkit website.
+description:
+ Learn how to create, submit, and review usage examples for the Splashkit
+ website.
sidebar:
label: Overview
---
@@ -10,16 +12,17 @@ import { LinkCard } from "@astrojs/starlight/components";
### What are Usage Examples?
-Usage examples demonstrate a specific Splashkit function within a simple, small program. The goal is
-to keep the program minimal while clearly showing how the function works.
+Usage examples demonstrate a specific Splashkit function within a simple, small
+program. The goal is to keep the program minimal while clearly showing how the
+function works.
For instance, the `write_line` example on the Splashkit site found
-[here](https://splashkit.io/api/terminal/#write-line), clicking on the `See Code Example` shows how
-the `write_line` function works, with a brief title of the program, program code, and a screenshot
-of the output.
+[here](https://splashkit.io/api/terminal/#write-line) shows this in action:
+clicking `See Code Example` reveals how the `write_line` function works, with a
+brief title of the program, the program code, and a screenshot of the output.
-The following pages cover all the steps to create, submit, and review usage examples for the
-Splashkit website.
+The following pages cover all the steps to create, submit, and review usage
+examples for the Splashkit website.
## Steps to Completing a Usage Example
@@ -27,30 +30,37 @@ Splashkit website.
1. ### Choose a Planner Card or Idea
- Pick an existing planner card that matches a SplashKit function, or create a unique example.
+ Pick an existing planner card that matches a SplashKit function, or create a
+ unique example.
- Make sure the idea clearly demonstrates the function in a practical or visually interesting way.
+ Make sure the idea clearly demonstrates the function in a practical or
+ visually interesting way.
2. ### Develop a Demonstrative Program
- Write a simple program to illustrate the function’s use. Focus on clarity and simplicity,
- ensuring that the code is easy to follow and that it showcases the function effectively.
+ Write a simple program to illustrate the function’s use. Focus on clarity and
+ simplicity, ensuring that the code is easy to follow and that it showcases
+ the function effectively.
3. ### Write the Program in Multiple Languages
- **C++**: Implement the example with standard C++ practices.
- - **C#**: Provide both a version using top-level statements and an Object-Oriented (OOP) version.
- - **Python**: Write a Python version that is straightforward and reflects the same functionality.
+ - **C#**: Provide both a version using top-level statements and an
+ Object-Oriented (OOP) version.
+ - **Python**: Write a Python version that is straightforward and reflects the
+ same functionality.
4. ### Create a Title for the example
- Think of a title that describes the overall functionality of the program.
- - Be creative with this title. It should not be the name of the function, or use the word
- "Example".
+ - Be creative with this title. It should not be the name of the function, or
+ use the word "Example".
5. ### Capture Output of Program
- Run the program in any of the languages to capture the details of the program running.
+ Run the program in any of the languages to capture the details of the program
+ running.
- You might do this with a screenshot, screen recording converted to a GIF, or an audio recording.
+ You might do this with a screenshot, screen recording converted to a GIF, or
+ an audio recording.
| Output Format | Accepted File Type | Example of when to use |
| ---------------- | ------------------ | --------------------------------------------------------- |
@@ -59,16 +69,18 @@ Splashkit website.
| Audio Recording | `.webm` file | The program include audio sounds. |
6. ### Add Files to the Correct Folder in the SplashKit Repository
- - Copy your files into the appropriate folder within the `splashkit.io-starlight` repository.
- - Ensure file names and directory paths are consistent with SplashKit’s structure.
+ - Copy your files into the appropriate folder within the
+ `splashkit.io-starlight` repository.
+ - Ensure file names and directory paths are consistent with SplashKit’s
+ structure.
7. ### Submit a Pull Request for Usage Examples
Follow the steps in the
[Pull Requests for Usage Examples guide](/products/splashkit/05-pull-request-template/).
- Use the provided PR template, describe your example clearly, and attach explanations and
- screenshots to the pull request.
+ Use the provided PR template, describe your example clearly, and attach
+ explanations and screenshots to the pull request.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/02-creating-usage-examples.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/02-creating-usage-examples.mdx
index 9799008ea..a64ce5cfe 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/02-creating-usage-examples.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/02-creating-usage-examples.mdx
@@ -1,6 +1,8 @@
---
title: Guide to creating usage examples
-description: Learn how to create, submit, and review usage examples for the Splashkit website.
+description:
+ Learn how to create, submit, and review usage examples for the Splashkit
+ website.
sidebar:
label: Creating Usage Examples
---
@@ -12,11 +14,13 @@ This guide explains how to create a usage example for the Splashkit website.
### What are Usage Examples?
-Usage examples demonstrate a specific Splashkit function within a simple, small program. The goal is
-to keep the program minimal while clearly showing how the function works or how it can be utilised.
+Usage examples demonstrate a specific Splashkit function within a simple, small
+program. The goal is to keep the program minimal while clearly showing how the
+function works or how it can be utilised.
-For instance, the `write_line` example on the Splashkit site shows how the `write_line` function
-works, with a relevant title, program code, and a file showing the program output.
+For instance, the `write_line` example on the Splashkit site shows how the
+`write_line` function works, with a relevant title, program code, and a file
+showing the program output.

@@ -33,32 +37,36 @@ An initial usage example includes 6 files:
1. ### Choosing a Function
- The planner board has numerous usage example suggestions for various functions, however you are
- welcomed and encouraged to come up with your own creative ideas for these. Scroll through the
- [API Documentation](https://splashkit.io/api/) to find various functions you could use for the
- usage example, when coming up with an idea, make sure to also check to see if anyone else has
- already done it.
+   The planner board has numerous usage example suggestions for various
+   functions; however, you are welcome and encouraged to come up with your own
+   creative ideas for these. Scroll through the
+   [API Documentation](https://splashkit.io/api/) to find various functions you
+   could use for the usage example. When coming up with an idea, make sure to
+   also check whether anyone else has already done it.
- The goal is to create **one usage example per function** in the SplashKit library. Building out
- these single examples for each function is the priority to round out the variety of examples
- available.
+ The goal is to create **one usage example per function** in the SplashKit
+ library. Building out these single examples for each function is the priority
+ to round out the variety of examples available.
- If you come up with a creative idea for a function that already has an example, please share it
- in the SplashKit group chat or reach out to your Capstone Mentor to discuss whether it's a good
- idea to add another example for that function.
+ If you come up with a creative idea for a function that already has an
+ example, please share it in the SplashKit group chat or reach out to your
+ Capstone Mentor to discuss whether it's a good idea to add another example
+ for that function.
2. ### Creating The Program
- Create the program with as few lines of code as possible. You'll need to write the program in
- C++, C# (using top-level statements and OOP version), and Python, all using Splashkit. Start with
- the language you're most comfortable with, then convert it to the others.
+ Create the program with as few lines of code as possible. You'll need to
+ write the program in C++, C# (using top-level statements and OOP version),
+ and Python, all using Splashkit. Start with the language you're most
+ comfortable with, then convert it to the others.
- Note for C# to-level: Use `using static SplashKitSDK.SplashKit;`. For the OOP version: use
- `using SplashKitSDK;`.
+   Note for C# top-level: Use `using static SplashKitSDK.SplashKit;`. For the
+   OOP version: use `using SplashKitSDK;`.
#### Example
- If C++ is your strength, begin by creating a small program like this `dec_to_hex` example:
+ If C++ is your strength, begin by creating a small program like this
+ `dec_to_hex` example:
```cpp
#include "splashkit.h"
@@ -86,9 +94,10 @@ An initial usage example includes 6 files:
}
```
- This example includes meaningful comments that clearly explain each part of the program, which
- helps readers understand both the structure and function. When converting to C#, notice that the
- structure and comments are kept consistent, with syntax changes specific to the language:
+ This example includes meaningful comments that clearly explain each part of
+ the program, which helps readers understand both the structure and function.
+ When converting to C#, notice that the structure and comments are kept
+ consistent, with syntax changes specific to the language:
```csharp
using static SplashKitSDK.SplashKit;
@@ -117,20 +126,21 @@ An initial usage example includes 6 files:
**Title:** Simple Decimal to Hexadecimal Converter
- Take a screenshot of the output window and save it. If the screenshot is of the terminal window,
- please crop out any commands used to run the program.
+ Take a screenshot of the output window and save it. If the screenshot is of
+ the terminal window, please crop out any commands used to run the program.
4. ### Adding these files to Splashkit Starlight.io
- Now that you're done, you should have the following 6 files when completing an initial usage
- example PR:
+ Now that you're done, you should have the following 6 files when completing
+ an initial usage example PR:
- - txt file - C++ file - python file - C# (top-level statements) file - C# (OOP) file - Screenshot
- of the output
+   - txt file
+   - C++ file
+   - Python file
+   - C# (top-level statements) file
+   - C# (OOP) file
+   - Screenshot of the output
-Now you'll need to rename these before adding them to the Splashkit Starlight.io repo.
+Now you'll need to rename these before adding them to the Splashkit Starlight.io
+repo.
#### Example
@@ -151,13 +161,15 @@ In a separate pull request, to add to existing examples:
write_line-1-hello-world-beyond.cpp
```
-Now you can add these files to the Splashkit Starlight.io repo. If when you add them you find there
-is already some files of a usage example using this same function, then you should increment the
-number in the file name by 1. Furthermore, it is important that you use the function's unique global
-name for the files names _(This only applies for functions that are overloaded.)_
+Now you can add these files to the Splashkit Starlight.io repo. If, when you
+add them, you find there are already files from a usage example using this same
+function, you should increment the number in the file name by 1. Furthermore,
+it is important that you use the function's unique global name for the file
+names _(this only applies to functions that are overloaded)_.
-Now that your files are named, add all of your files to the `usage-examples` folder under the
-appropriate function category (e.g., `terminal/`). The location in the Statlight.io repo is:
+Now that your files are named, add all of your files to the `usage-examples`
+folder under the appropriate function category (e.g., `terminal/`). The location
+in the Starlight.io repo is:
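The file-numbering rule described above can be sketched as a small Python helper. This is purely illustrative (the function and its signature are hypothetical, not part of any SplashKit tooling): given the existing file names for a function, it derives the next `<function>-<n>-<slug>.<ext>` name.

```python
import re


def next_example_filename(existing_names, function_name, slug, extension):
    """Hypothetical sketch of the naming rule: usage example files follow
    ``<function>-<n>-<slug>.<ext>``, and a new example for a function that
    already has examples takes the next number."""
    # Match file names for this function and collect their numbers.
    pattern = re.compile(rf"^{re.escape(function_name)}-(\d+)-")
    numbers = [int(m.group(1)) for name in existing_names
               if (m := pattern.match(name))]
    # First example gets 1; otherwise increment the highest existing number.
    next_number = max(numbers, default=0) + 1
    return f"{function_name}-{next_number}-{slug}.{extension}"


# A first example for a function gets number 1:
print(next_example_filename([], "write_line", "hello-world", "cpp"))
# write_line-1-hello-world.cpp

# With an existing example, the number increments:
print(next_example_filename(["write_line-1-hello-world.cpp"],
                            "write_line", "hello-world-beyond", "cpp"))
# write_line-2-hello-world-beyond.cpp
```

In practice you would check the existing files by eye in the `usage-examples` folder; the sketch just makes the increment rule concrete.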
@@ -192,8 +204,9 @@ appropriate function category (e.g., `terminal/`). The location in the Statlight
-For more information regarding the proper file naming and placement can be found within the repo in
-the `CONTRIBUTING.mdx`. The location of the file can be found here:
+More information regarding proper file naming and placement can be found within
+the repo in `CONTRIBUTING.mdx`. The location of the file is:
@@ -206,6 +219,7 @@ the `CONTRIBUTING.mdx`. The location of the file can be found here:
## Next Steps
Now follow the steps in
-[Usage Example Pull Requests](/products/splashkit/05-pull-request-template/), and submit your usage
-example for review. For doing a peer review, follow the steps in
+[Usage Example Pull Requests](/products/splashkit/05-pull-request-template/),
+and submit your usage example for review. For doing a peer review, follow the
+steps in
[Peer Review Guide for Usage Examples](/products/splashkit/documentation/splashkit-website/usage-examples/04-usage-peer-review).
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/04-usage-peer-review.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/04-usage-peer-review.mdx
index e33b46e65..0160d8c88 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/04-usage-peer-review.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/04-usage-peer-review.mdx
@@ -1,6 +1,8 @@
---
title: How to do a Peer Review for Usage Examples
-description: Learn how to create, submit, and review usage examples for the Splashkit website.
+description:
+ Learn how to create, submit, and review usage examples for the Splashkit
+ website.
sidebar:
label: Usage Peer Review
---
@@ -8,8 +10,8 @@ sidebar:
## Peer Reviewing a Usage Example
Doing a peer review for usage examples follows the same process as the
-[peer review guide](/products/splashkit/06-peer-review). However, you will need to follow the steps
-for reviewing usage examples.
+[peer review guide](/products/splashkit/06-peer-review). However, you will need
+to follow the steps for reviewing usage examples.
When reviewing an example, follow these steps:
@@ -23,12 +25,13 @@ Compile/run each code file on your machine.
### 2. Analyse the effectiveness
-Consider the function being demonstrated and how it might be used by a SplashKit user.
+Consider the function being demonstrated and how it might be used by a SplashKit
+user.
- Does the code example demonstrate the usefulness of the function?
- Or, do the code example give more insight into how the function works?
-- Is the example simple enough for an SIT102 or SIT771 student to follow, while still being an
- interesting example?
+- Is the example simple enough for an SIT102 or SIT771 student to follow, while
+ still being an interesting example?
### 3. Compare and analyse the code
@@ -39,43 +42,50 @@ Check the structure and functionality of the code in each language.
- Does the code follow the
[style guide](/products/splashkit/documentation/splashkit-website/usage-examples/05-usage-example-style-guide)
requirements?
-- Is the Object Oriented C# code using the Object Oriented format where possible/relevant?
-- Is the python code using the correct function names? (These function names will often differ from
- other languages due to python not handling function overloads)
+- Is the Object Oriented C# code using the Object Oriented format where
+ possible/relevant?
+- Is the Python code using the correct function names? (These function names
+  will often differ from other languages due to Python not handling function
+  overloads)
### 4. Test in localhost
Check the example displays correctly in your local development environment.
- Does the website build successfully with `npm run build`?
-- Does the example display under the correct function when previewing the website with
- `npm run preview`?
+- Does the example display under the correct function when previewing the
+ website with `npm run preview`?
- Is the correct function highlighted in the example code?
### 5. Request changes
Add your review comments on the pull request on GitHub.
-- Add comments for individual lines, or groups of lines in the code files. This helps the original
- contributor to understand exactly what the issue is related to.
+- Add comments for individual lines, or groups of lines in the code files. This
+  helps the original contributor understand exactly what the issue relates to.
- Use professional language, and be polite but assertive.
-- Include the correct version of code where relevant, or any suggested improvements.
+- Include the correct version of code where relevant, or any suggested
+ improvements.
- Ask questions if something is confusing or unclear.
:::note
**First Peer Review:**
-- It is _very unlikely_ that you will approve a pull request on first review without any changes.
-- We all miss things in our own code, which is why peer reviews are so important.
+- It is _very unlikely_ that you will approve a pull request on first review
+ without any changes.
+- We all miss things in our own code, which is why peer reviews are so
+ important.
**Second Peer Review:**
-- This review is more likely to be able to be approved without changes being requested.
-- However, it is important to check that the first reviewer did not miss anything that _should have
- been found_ in the first review.
-- You might also have experience that allows you to consider other aspects that the first reviewer
- would not have noticed.
+- This review is more likely to be approved without changes being requested.
+- However, it is important to check that the first reviewer did not miss
+ anything that _should have been found_ in the first review.
+- You might also have experience that allows you to consider other aspects that
+ the first reviewer would not have noticed.
:::
@@ -83,29 +93,31 @@ Add your review comments on the pull request on GitHub.
Check for replies on the pull request from the original contributor.
-- These comments might be requesting more information about your suggested changes, so it is
- important to respond quickly.
-- Discussions may include some debate on the best way forward. Be open to the ideas suggested by the
- other person, and include evidence to back up your reasoning.
-- If needed, either the reviewer or the original contributor can reach out to your Mentor to help
- decide what the outcome should be.
-- Check for a comment from the original contributor that the requested changes have been made.
+- These comments might be requesting more information about your suggested
+ changes, so it is important to respond quickly.
+- Discussions may include some debate on the best way forward. Be open to the
+ ideas suggested by the other person, and include evidence to back up your
+ reasoning.
+- If needed, either the reviewer or the original contributor can reach out to
+ your Mentor to help decide what the outcome should be.
+- Check for a comment from the original contributor that the requested changes
+ have been made.
### 7. Review the pull request again
Repeat steps 1 - 4 above.
-- It is important to review the pull request thoroughly to ensure that the changes have not caused
- other issues.
-- You may notice other issues once changes have been made. This is okay, and is part of the
- development process.
-- Ensure that the pull request will be able to be merged upstream to the live site before you
- approve it.
+- It is important to review the pull request thoroughly to ensure that the
+ changes have not caused other issues.
+- You may notice other issues once changes have been made. This is okay, and is
+ part of the development process.
+- Ensure that the pull request will be able to be merged upstream to the live
+ site before you approve it.
### 8. Approve the pull request
-Once all the requested changes have been made, approve the pull request using the following template
-in the comment of the approving review:
+Once all the requested changes have been made, approve the pull request using
+the following template in the comment of the approving review:
```plaintext
# Peer Review
@@ -138,5 +150,5 @@ I've reviewed the ...
```
Ensure while doing peer reviews that you also follow the
-[Planner Board Etiquette](/products/splashkit/07-planner-board) for moving tasks through the
-process.
+[Planner Board Etiquette](/products/splashkit/07-planner-board) for moving tasks
+through the process.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/05-usage-example-style-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/05-usage-example-style-guide.mdx
index e2309f2fb..9e6ede285 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/05-usage-example-style-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/05-usage-example-style-guide.mdx
@@ -9,17 +9,19 @@ import { Steps, Tabs, TabItem } from "@astrojs/starlight/components";
## Code Style and Formatting Guide
-This guide aims to introduce new team members to some standard practices for contributing to
-splashkit, such as variable declarations, formatting across languages and general do's and don'ts
+This guide aims to introduce new team members to some standard practices for
+contributing to SplashKit, such as variable declarations, formatting across
+languages, and general do's and don'ts.
## Naming Conventions
-These are general naming conventions for variables, functions, namespaces and more that should be
-used when creating usage examples for SplashKit.
+These are general naming conventions for variables, functions, namespaces and
+more that should be used when creating usage examples for SplashKit.
### 1. Variable declarations
-Variable declarations for SplashKit usage examples should be in the following format:
+Variable declarations for SplashKit usage examples should be in the following
+format:
| Language | Naming convention | Variable Example |
| -------- | ----------------- | ---------------------- |
@@ -103,7 +105,8 @@ variable_name = "This is snake case"
### 2. Function and Methods
-When calling and naming functions and methods using SplashKit we do the following:
+When calling and naming functions and methods using SplashKit, we do the
+following:
| Language | Naming convention | Function/Method Example |
| -------------------- | ----------------- | ------------------------------------ |
@@ -114,19 +117,21 @@ When calling and naming functions and methods using SplashKit we do the followin
:::tip[Python function names]
-Python function names will often differ from other languages due to python not handling function
-overloads. The example above shows the difference for this overload of Refresh Screen.
+Python function names will often differ from other languages due to Python not
+handling function overloads. The example above shows the difference for this
+overload of Refresh Screen.
-You can check the correct function signature in the [API Documentation](https://splashkit.io/api/)
-pages.
+You can check the correct function signature in the
+[API Documentation](https://splashkit.io/api/) pages.
:::
### 3. C# Namespaces (Object-Oriented)
-The namespace for a usage example in C# (Object-Oriented version) should be written in Pascal case,
-and will follow the convention of: The name of the function being demonstrated, with the word
-"Example" added on the end, also written in Pascal case.
+The namespace for a usage example in C# (Object-Oriented version) should be
+written in Pascal case, following the convention of the name of the function
+being demonstrated with the word "Example" added to the end.
**For example**:
@@ -141,8 +146,9 @@ namespace DrawRectangleExample
### 4. Window Names
-When using windows to display any graphics the window name should be a description of what is
-happening on the window rather than the name of the function you are displaying.
+When using windows to display any graphics, the window name should be a
+description of what is happening on the window, rather than the name of the
+function you are displaying.
| Good Window Name | Bad Window Name |
| ---------------------------------------- | -------------------------------------- |
@@ -153,10 +159,11 @@ happening on the window rather than the name of the function you are displaying.
### Code Comments
-Code comments should be used to explain the _why_ behind code, not the _what_. Code comments should
-be clear and concise
+Code comments should be used to explain the _why_ behind code, not the _what_.
+Code comments should be clear and concise.
-A good Comment should explain the intent or purpose A bad comment will state the obvious
+A good comment should explain the intent or purpose; a bad comment will state
+the obvious.
| Do | Don't |
| --------------------------------------------- | -------------------------------------------------------------------------------- |
@@ -189,9 +196,10 @@ if (playerHealth <= 0)
### Braces and Indentation
-- **Braces:** Place braces (curly brackets) on the line following the declaration.
-- **Indentation:** Between any new pair of curly brackets (`{`, `}`), write your code **4** further
- spaces in from the left.
+- **Braces:** Place braces (curly brackets) on the line following the
+ declaration.
+- **Indentation:** Between any new pair of curly brackets (`{`, `}`), write your
+ code **4** further spaces in from the left.
**Do**:
@@ -213,7 +221,8 @@ int main() {
### If Statements
- The if statement condition should be on its own line.
-- Use braces, even if just 1 line below, as this is easier for beginners to read.
+- Use braces, even if just 1 line below, as this is easier for beginners to
+ read.
**Do**:
@@ -232,8 +241,8 @@ if (mouse_clicked(LEFT_BUTTON)) x-= SPEED;
### Graphical Examples
-- If your program is using a Graphical Window, any text information should be displayed on the
- Window.
+- If your program is using a Graphical Window, any text information should be
+ displayed on the Window.
- Do not use terminal outputs when using a graphics window in your program.
### Looping
@@ -244,13 +253,14 @@ if (mouse_clicked(LEFT_BUTTON)) x-= SPEED;
### Use simple code
-As SplashKit usage examples are targeted to beginners, so it's important to keep the code as simple
-and readable as possible.
+As SplashKit usage examples are targeted at beginners, it's important to keep
+the code as simple and readable as possible.
Avoid using advanced features such as:
-- **Ternary statements/operators** (`condition ? value-if-true : value-if-false`), and instead,
- stick to traditional if/else statements.
+- **Ternary statements/operators**
+ (`condition ? value-if-true : value-if-false`), and instead, stick to
+ traditional if/else statements.
## C# OOP vs Top Level
@@ -258,16 +268,16 @@ Avoid using advanced features such as:
**Top-Level Statements**:
-- Top-level statements allow you to write C# code without explicitly defining a class or `Main`
- method.
-- Uses the directive: `using static SplashKitSDK.SplashKit;`, so SplashKit functions are called
- directly, such as `WriteLine("Hello!");`.
+- Top-level statements allow you to write C# code without explicitly defining a
+ class or `Main` method.
+- Uses the directive: `using static SplashKitSDK.SplashKit;`, so SplashKit
+ functions are called directly, such as `WriteLine("Hello!");`.
**Object-Oriented Programming (OOP)**:
- OOP-style C# code requires defining a `Main` method inside a class.
-- Uses `using SplashKitSDK;`, meaning all SplashKit commands are prefixed with `SplashKit.` (e.g.,
- `SplashKit.WriteLine("Hello!");`).
+- Uses `using SplashKitSDK;`, meaning all SplashKit commands are prefixed with
+ `SplashKit.` (e.g., `SplashKit.WriteLine("Hello!");`).
### Some key differences
@@ -277,8 +287,9 @@ When converting to OOP you should try and create and use objects where possible.
:::
-Some key differences to note when converting between OOP and top level statements are that OOP
-should be seeking to highlight the objectivity where possible.
+A key difference to note when converting between OOP and top level statements
+is that the OOP version should seek to highlight the use of objects where
+possible.
Note the code examples below:
@@ -320,15 +331,17 @@ Delay(5000);
CloseWindow(window);
```
-The two code blocks here are functionally identical, but highlight some key differences between top
-level code and OOP code.
+The two code blocks here are functionally identical, but highlight some key
+differences between top level code and OOP code.
-Notice in the OOP code, we create the `rectangle` and `window` objects, and then perform the drawing
-of the rectangles through the objects' methods. In the top level code however we are calling
-functions and passing in the object as parameters.
+Notice in the OOP code, we create the `rectangle` and `window` objects, and then
+perform the drawing of the rectangles through the objects' methods. In the top
+level code, however, we are calling functions and passing in the objects as
+parameters.
-Another key difference here, in OOP we call the color class `Color` and specify the colour we want
-via the member `Red` whereas in Top Level code we call the function `ColorRed()`.
+Another key difference here is that in OOP we call the color class `Color` and
+specify the colour we want via the member `Red`, whereas in top level code we
+call the function `ColorRed()`.
:::tip[Using SplashKit.OpenWindow]
@@ -473,7 +486,8 @@ WriteLine("Hello, " + name + "!");
:::note[Namespace naming]
- In the namespace name, replace `FunctionName` with the name of the function, using PascalCase.
+ In the namespace name, replace `FunctionName` with the name of the function,
+ using PascalCase.
:::
@@ -485,16 +499,17 @@ WriteLine("Hello, " + name + "!");
:::note[Variables with SplashKit types]
- If your code has variables with a SplashKit data type, such as `Window`, `Bitmap`, etc, then you
- will need to include `using SplashKitSDK;` as well.
+ If your code has variables with a SplashKit data type, such as `Window`,
+ `Bitmap`, etc, then you will need to include `using SplashKitSDK;` as well.
- You can then comment this line out, and check that only the SplashKit variables have error
- squiggles, then uncomment the line again.
+ You can then comment this line out, and check that only the SplashKit
+ variables have error squiggles, then uncomment the line again.
:::
- Remove `namespace`, `class`, and `Main` method wrappers.
- Remove `SplashKit.` prefixes.
-Using this guide, you can quickly convert between top-level statements and OOP formats in SplashKit
-tutorials, making the code accessible for different programming preferences.
+Using this guide, you can quickly convert between top-level statements and OOP
+formats in SplashKit tutorials, making the code accessible for different
+programming preferences.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/06-updating-examples.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/06-updating-examples.mdx
index 57d1615f3..cb2559037 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/06-updating-examples.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Usage Examples/06-updating-examples.mdx
@@ -1,8 +1,8 @@
---
title: Updating Usage Examples by Adding a Missing Language
description:
- Learn how to update an existing usage example by adding a missing language to the Splashkit
- website.
+ Learn how to update an existing usage example by adding a missing language to
+ the Splashkit website.
sidebar:
label: Updating Usage Examples
draft: true
@@ -11,12 +11,13 @@ draft: true
import { Tabs, TabItem } from "@astrojs/starlight/components";
import { FileTree } from "@astrojs/starlight/components";
-This guide explains how to add a missing language to an existing usage example for the Splashkit
-website.
+This guide explains how to add a missing language to an existing usage example
+for the Splashkit website.
### Steps to Update a Usage Example with a Missing Language
-When updating an existing usage example with a missing language, your contribution should include:
+When updating an existing usage example with a missing language, your
+contribution should include:
1. Code in the missing language(s)
2. Consistent comments and formatting across all language versions
@@ -28,17 +29,20 @@ When updating an existing usage example with a missing language, your contributi
1. ### Reviewing Existing Examples
- Planner cards are typically created to update existing examples with missing languages. Review
- the planner board to find examples that need additional languages to be added.
+ Planner cards are typically created to update existing examples with missing
+ languages. Review the planner board to find examples that need additional
+ languages to be added.
- For instance, if an example of `dec_to_hex` exists in C++ and Python but lacks a C# version and
- there is a planner card for it, you can add the missing C# version.
+ For instance, if an example of `dec_to_hex` exists in C++ and Python but
+ lacks a C# version and there is a planner card for it, you can add the
+ missing C# version.
2. ### Writing the Program in the Missing Language
- Write the program in the missing language while maintaining consistency with the other versions.
- Follow similar structure, naming conventions, and commenting styles. Below is an example for
- adding C# to an existing C++ example:
+ Write the program in the missing language while maintaining consistency with
+ the other versions. Follow similar structure, naming conventions, and
+ commenting styles. Below is an example for adding C# to an existing C++
+ example:
**Original C++ Code:**
@@ -70,7 +74,8 @@ When updating an existing usage example with a missing language, your contributi
**C# Version (Top-level statements):**
- Note, for C# top-level statements, use `using static SplashKitSDK.SplashKit;`.
+ Note, for C# top-level statements, use
+ `using static SplashKitSDK.SplashKit;`.
```csharp
using static SplashKitSDK.SplashKit;
@@ -129,18 +134,20 @@ When updating an existing usage example with a missing language, your contributi
```
- See how the structure and comments are consistent across all versions, with syntax changes
- specific to the language. This consistency helps readers understand the program's structure and
- function. If you need some help in understanding OOP and top level statements in C#, refer to the
+ See how the structure and comments are consistent across all versions, with
+ syntax changes specific to the language. This consistency helps readers
+ understand the program's structure and function. If you need some help in
+ understanding OOP and top level statements in C#, refer to the
[OOP Styling](/products/splashkit/documentation/splashkit-website/tutorials-documentation/04-oop-styling)
guide.
3. ### Naming Files for the New Language Version
- Rename the files according to Splashkit’s conventions. Use the same naming pattern as the
- existing example.
+ Rename the files according to Splashkit’s conventions. Use the same naming
+ pattern as the existing example.
- For instance, if you add C# to a `write_line` example, you would name the two files as:
+ For instance, if you add C# to a `write_line` example, you would name the two
+ files as:
```plaintext
write_line-1-example-top-level.cs
@@ -149,8 +156,8 @@ When updating an existing usage example with a missing language, your contributi
4. ### Adding the New Files to the Splashkit Starlight.io Repo
- Add the new files to the `usage-example` folder under the appropriate function category,
- following the existing file structure:
+ Add the new files to the `usage-example` folder under the appropriate
+ function category, following the existing file structure:
@@ -162,8 +169,8 @@ When updating an existing usage example with a missing language, your contributi
-For detailed naming and placement rules, refer to [CONTRIBUTING.md] the `CONTRIBUTING.mdx` file
-which can be found in the SplashKit Startlight. Repo at:
+For detailed naming and placement rules, refer to the `CONTRIBUTING.mdx` file,
+which can be found in the SplashKit Starlight repo at:
@@ -176,6 +183,6 @@ which can be found in the SplashKit Startlight. Repo at:
### Next Steps
After adding the files, follow the steps in
-[Usage Example Pull Requests](/products/splashkit/05-pull-request-template/) to submit for review.
-For peer review, refer to the
+[Usage Example Pull Requests](/products/splashkit/05-pull-request-template/) to
+submit for review. For peer review, refer to the
[Peer Review Guide for Usage Examples](/products/splashkit/documentation/splashkit-website/usage-examples/04-usage-peer-review).
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/01-splashkit-website-overview.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/01-splashkit-website-overview.mdx
index 1edf18a74..338e78d47 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/01-splashkit-website-overview.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/01-splashkit-website-overview.mdx
@@ -8,79 +8,90 @@ sidebar:
## Introduction
-The SplashKit website is the gateway to empowering learners and developers with the tools they need
-as they begin their journey in programming and game development. Having a well-designed, accessible,
-and user-friendly website is important for new learners, as it ensures that resources, tutorials,
-and documentation are easily accessible to everyone, regardless of their level of experience or any
-accessibility challenges they may face.
+The SplashKit website is the gateway to empowering learners and developers with
+the tools they need as they begin their journey in programming and game
+development. Having a well-designed, accessible, and user-friendly website is
+important for new learners, as it ensures that resources, tutorials, and
+documentation are easily accessible to everyone, regardless of their level of
+experience or any accessibility challenges they may face.
-As a team, we’re focused on creating a site that reflects the inclusivity and innovation of the
-SplashKit SDK, making sure everyone can get the most out of what we offer.
+As a team, we’re focused on creating a site that reflects the inclusivity and
+innovation of the SplashKit SDK, making sure everyone can get the most out of
+what we offer.
---
## What We're Working Towards
-As part of the SplashKit Website Development Team, you’ll be helping us build, refine, and enhance
-the user experience on the site. Some of the broader goals we’re working towards include:
-
-- **Enhancing the Onboarding Experience**: We aim to simplify the onboarding process for new users
- and contributors, making it easy for them to find the resources they need.
-- **Fixing and Expanding Accessibility of Tutorials**: We are always improving our tutorials and
- guides, ensuring they’re up-to-date and accessible to learners.
-- **Showcasing Community Contributions**: We want to highlight the incredible projects created using
- SplashKit by building a dedicated showcase section.
-- **Upholding Accessibility Standards**: Ensuring our site meets and exceeds accessibility standards
- so that it’s usable by everyone, including those with disabilities.
-- **Integrating SplashKit Online**: Integrating the SplashKit Online tool will significantly enhance
- the accessibility and effectiveness of the SplashKit resources, enabling learners to preview
- SplashKit functions directly within the website.
-
-These goals are collaborative efforts, and your contributions will help turn them into real features
-that impact the SplashKit community directly.
+As part of the SplashKit Website Development Team, you’ll be helping us build,
+refine, and enhance the user experience on the site. Some of the broader goals
+we’re working towards include:
+
+- **Enhancing the Onboarding Experience**: We aim to simplify the onboarding
+ process for new users and contributors, making it easy for them to find the
+ resources they need.
+- **Fixing and Expanding Accessibility of Tutorials**: We are always improving
+ our tutorials and guides, ensuring they’re up-to-date and accessible to
+ learners.
+- **Showcasing Community Contributions**: We want to highlight the incredible
+ projects created using SplashKit by building a dedicated showcase section.
+- **Upholding Accessibility Standards**: Ensuring our site meets and exceeds
+ accessibility standards so that it’s usable by everyone, including those with
+ disabilities.
+- **Integrating SplashKit Online**: Integrating the SplashKit Online tool will
+ significantly enhance the accessibility and effectiveness of the SplashKit
+ resources, enabling learners to preview SplashKit functions directly within
+ the website.
+
+These goals are collaborative efforts, and your contributions will help turn
+them into real features that impact the SplashKit community directly.
---
## What You Can Expect
-By joining the SplashKit Website Development Team, you’ll be stepping into a collaborative,
-open-source environment where everyone’s contributions are valued, with your contributions being
-experienced by users today. Here’s what you can expect from working with us:
-
-- **Collaboration and Support**: You’ll work with a team of dedicated contributors who will support
- you while you learn how to contribute to an exciting tool. We believe in sharing knowledge and
- improving together.
-- **Learning Opportunities**: Whether you're new to web development or an experienced contributor,
- you'll gain hands-on experience with modern tools like Astro, Starlight, and web frameworks.
- You’ll also learn about accessibility best practices, design principles, and performance
- optimisation.
-- **Creative Freedom**: We encourage creative solutions and welcome new ideas. As a contributor,
- you'll have the opportunity to shape the direction of the website, suggest improvements, and see
- your work in action.
-- **Peer Feedback and Reviews**: You’ll receive valuable feedback on your code through peer reviews,
- helping you grow as a developer while maintaining the quality of the website.
+By joining the SplashKit Website Development Team, you’ll be stepping into a
+collaborative, open-source environment where everyone’s contributions are
+valued, and your work is experienced by users today. Here’s what you can
+expect from working with us:
+
+- **Collaboration and Support**: You’ll work with a team of dedicated
+ contributors who will support you while you learn how to contribute to an
+ exciting tool. We believe in sharing knowledge and improving together.
+- **Learning Opportunities**: Whether you're new to web development or an
+ experienced contributor, you'll gain hands-on experience with modern tools
+ like Astro, Starlight, and web frameworks. You’ll also learn about
+ accessibility best practices, design principles, and performance optimisation.
+- **Creative Freedom**: We encourage creative solutions and welcome new ideas.
+ As a contributor, you'll have the opportunity to shape the direction of the
+ website, suggest improvements, and see your work in action.
+- **Peer Feedback and Reviews**: You’ll receive valuable feedback on your code
+ through peer reviews, helping you grow as a developer while maintaining the
+ quality of the website.
---
## What You’ll Gain
-As part of the team, you’ll gain practical experience working on a large-scale, open-source project,
-giving you insights into web development, accessibility, and content management. Here’s what else
-you can expect:
-
-- **Experience in Open-Source Development**: By contributing to a real-world project, you'll improve
- your coding skills, learn best practices, and collaborate with others, which is a valuable
- experience for your personal and professional development.
-- **Portfolio Building**: Your work on the SplashKit website will be visible to the public, giving
- you tangible examples to include in your portfolio or showcase in job interviews.
-- **Contribution to the Community**: You’ll be making a direct impact the learning journey of
- stduents (and even more experienced developers) who rely on SplashKit to build their skills and
- projects.
+As part of the team, you’ll gain practical experience working on a large-scale,
+open-source project, giving you insights into web development, accessibility,
+and content management. Here’s what else you can expect:
+
+- **Experience in Open-Source Development**: By contributing to a real-world
+ project, you'll improve your coding skills, learn best practices, and
+ collaborate with others, which is a valuable experience for your personal and
+ professional development.
+- **Portfolio Building**: Your work on the SplashKit website will be visible to
+ the public, giving you tangible examples to include in your portfolio or
+ showcase in job interviews.
+- **Contribution to the Community**: You’ll be making a direct impact on the
+  learning journey of students (and even more experienced developers) who rely
+  on SplashKit to build their skills and projects.
---
## Ready to Get Started?
-Check out the support documentation to get familiar with the project structure and how to best
-contribute to the team. We’re excited to have you on the team, and we look forward to building
-something amazing together!
+Check out the support documentation to get familiar with the project structure
+and how to best contribute to the team. We’re excited to have you on the team,
+and we look forward to building something amazing together!
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/02-web-dev-files.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/02-web-dev-files.mdx
index b6bec083a..1e712eb2d 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/02-web-dev-files.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/02-web-dev-files.mdx
@@ -4,10 +4,11 @@ sidebar:
label: Key Web Development Files
---
-This document will help you understand the core files involved in the SplashKit website development
-process. It provides guidance on which files to update when contributing and how to handle different
-file types in peer reviews. This is not an exhaustive list, however should direct you to the key
-files and file types when contributing.
+This document will help you understand the core files involved in the SplashKit
+website development process. It provides guidance on which files to update when
+contributing and how to handle different file types in peer reviews. This is
+not an exhaustive list; however, it should direct you to the key files and file
+types when contributing.
## Contents
@@ -41,14 +42,15 @@ files and file types when contributing.
### `package.json` and `package-lock.json`
-- **Purpose**: `package.json` defines the project’s dependencies, metadata, and scripts, while
- `package-lock.json` locks the versions of the dependencies to ensure consistency across different
- environments.
+- **Purpose**: `package.json` defines the project’s dependencies, metadata, and
+ scripts, while `package-lock.json` locks the versions of the dependencies to
+ ensure consistency across different environments.
- **Installing Dependencies**:
- When you run `npm install `, the new package is added to the `package.json` under the
- `dependencies` or `devDependencies` section. It also updates `package-lock.json` to lock the
- specific versions of the installed dependencies.
+  When you run `npm install <package-name>`, the new package is added to the
+ `package.json` under the `dependencies` or `devDependencies` section. It also
+ updates `package-lock.json` to lock the specific versions of the installed
+ dependencies.
**Example**:
@@ -66,32 +68,34 @@ files and file types when contributing.
}
```
- It will also update `package-lock.json` to include detailed version information for `react` and
- any transitive dependencies to ensure consistency across environments.
+ It will also update `package-lock.json` to include detailed version
+ information for `react` and any transitive dependencies to ensure consistency
+ across environments.
-In most cases, you won’t need to manually edit `package.json` or `package-lock.json`. If you want to
-remove a dependency, it’s best to use:
+In most cases, you won’t need to manually edit `package.json` or
+`package-lock.json`. If you want to remove a dependency, it’s best to use:
```shell
npm uninstall <package-name>
```
-This will automatically remove the package from both `package.json` and `package-lock.json`. By
-doing this, the project stays clean and free of unused dependencies.
+This will automatically remove the package from both `package.json` and
+`package-lock.json`. By doing this, the project stays clean and free of unused
+dependencies.
### `astro.config.mjs`
-**Purpose**: This config file defines the core settings for the Astro project, such as integrations,
-site metadata, and base URLs.
+**Purpose**: This config file defines the core settings for the Astro project,
+such as integrations, site metadata, and base URLs.
-**Updates**: You may need to modify this file when adding integrations (e.g., Google Analytics) or
-updating site-wide settings such as title, description, or site configuration. Below are some
-specific examples:
+**Updates**: You may need to modify this file when adding integrations (e.g.,
+Google Analytics) or updating site-wide settings such as title, description, or
+site configuration. Below are some specific examples:
#### Site Metadata
-- **Why Update?**: You may want to change the global title, description, or other metadata for
- search engine optimisation and branding purposes.
+- **Why Update?**: You may want to change the global title, description, or
+ other metadata for search engine optimisation and branding purposes.
- **Example**:
```js
@@ -107,13 +111,13 @@ specific examples:
};
```
- Update `site` or `base` when the site’s URL changes or if you want to modify markdown handling
- (affecting headings, tables, etc.).
+ Update `site` or `base` when the site’s URL changes or if you want to modify
+ markdown handling (affecting headings, tables, etc.).
#### Integrations
-- **Why Update?**: If you’re adding design frameworks like TailwindCSS or optimising images through
- Astro.
+- **Why Update?**: If you’re adding design frameworks like TailwindCSS or
+ optimising images through Astro.
- **Example**: Integrating TailwindCSS for utility-first styling.
```js
@@ -124,12 +128,13 @@ specific examples:
};
```
- This allows you to quickly apply Tailwind's design utilities across your project.
+ This allows you to quickly apply Tailwind's design utilities across your
+ project.
#### Customising Vite
-- **Why Update?**: When you need to customise the build pipeline to handle assets (images, fonts)
- more efficiently.
+- **Why Update?**: When you need to customise the build pipeline to handle
+ assets (images, fonts) more efficiently.
- **Example**: Adjust Vite settings to optimise or transform images.
```js
@@ -142,13 +147,13 @@ specific examples:
};
```
- This controls when assets are inlined versus being loaded separately, which impacts performance
- and design choices.
+ This controls when assets are inlined versus being loaded separately, which
+ impacts performance and design choices.
#### Page Transitions and Animations
-- **Why Update?**: If you're adding smooth page transitions or animations between pages to improve
- UX.
+- **Why Update?**: If you're adding smooth page transitions or animations
+ between pages to improve UX.
- **Example**: Adding an integration for animations using Astro Motion.
```js
@@ -182,15 +187,17 @@ specific examples:
### `.astro` Files
-- **Purpose**: `.astro` files define the structure of pages and components. They are commonly used
- for building both static and dynamic website content.
+- **Purpose**: `.astro` files define the structure of pages and components. They
+ are commonly used for building both static and dynamic website content.
-- **What to update**: Add or modify pages in `src/pages/` or components in `src/components/`.
+- **What to update**: Add or modify pages in `src/pages/` or components in
+ `src/components/`.
-- **Scenario**: You need to update the sidebar to include a new section in the website's navigation.
+- **Scenario**: You need to update the sidebar to include a new section in the
+ website's navigation.
-- **How to update**: Modify the existing `Sidebar.astro` component to add a new navigation link
- pointing to the new section.
+- **How to update**: Modify the existing `Sidebar.astro` component to add a new
+ navigation link pointing to the new section.
```astro
<!-- Sidebar.astro (sketch only – the actual markup in the project may differ) -->
<nav>
  <ul>
    <li><a href="/guides">Guides</a></li>
    <li><a href="/new-section">New Section</a></li> <!-- newly added link -->
  </ul>
</nav>
```
-- **Explanation**: In this example, a new navigation item with a link to `/new-section` is added to
- the sidebar's navigation, allowing users to navigate to the newly created page.
-- **When to update**: Edit `.astro` files when changing page layouts, adding components, or
- modifying the site’s structure.
+- **Explanation**: In this example, a new navigation item with a link to
+ `/new-section` is added to the sidebar's navigation, allowing users to
+ navigate to the newly created page.
+- **When to update**: Edit `.astro` files when changing page layouts, adding
+ components, or modifying the site’s structure.
---
### `.jsx` and `.tsx` Files
-- **Purpose**: Handles interactivity and dynamic content through React components.
+- **Purpose**: Handles interactivity and dynamic content through React
+ components.
-- **What to update**: If adding or editing interactive elements like carousels, forms, or sliders,
- modify these files. They’re typically located in `src/components/react/`.
+- **What to update**: If adding or editing interactive elements like carousels,
+ forms, or sliders, modify these files. They’re typically located in
+ `src/components/react/`.
---
### `.css` Files
-- **Purpose**: CSS files manage the website's styles, such as layout, typography, colour palettes,
- and spacing. Common styles are defined in files like `src/styles/custom.css`, which holds the
- custom styles for the website.
+- **Purpose**: CSS files manage the website's styles, such as layout,
+ typography, colour palettes, and spacing. Common styles are defined in files
+ like `src/styles/custom.css`, which holds the custom styles for the website.
-- **What to update**: You can modify styles in files like `custom.css` or create new
- component-specific styles depending on the changes being made to the design or layout.
+- **What to update**: You can modify styles in files like `custom.css` or create
+ new component-specific styles depending on the changes being made to the
+ design or layout.
#### Example 1: Adjusting the colour palette in `custom.css`
-- **Scenario**: You need to update the website’s colour scheme to match a new brand identity.
-- **How to update**: Modify CSS variables in `custom.css` to reflect the new colours.
+- **Scenario**: You need to update the website’s colour scheme to match a new
+ brand identity.
+- **How to update**: Modify CSS variables in `custom.css` to reflect the new
+ colours.
```css
:root {
@@ -245,9 +258,10 @@ specific examples:
#### Example 2: Adding custom styles for a specific component
-- **Scenario**: You want to add specific styles for a newly created `Button.astro` component.
-- **How to update**: Add styles to `custom.css` for the `button` class or ID specific to the
- component.
+- **Scenario**: You want to add specific styles for a newly created
+ `Button.astro` component.
+- **How to update**: Add styles to `custom.css` for the `button` class or ID
+ specific to the component.
```css
.button-primary {
@@ -260,8 +274,9 @@ specific examples:
}
```
-- **When to update**: Modify CSS files when making changes to the overall design (e.g., colour
- palette, typography) or when specific components need unique styles.
+- **When to update**: Modify CSS files when making changes to the overall design
+ (e.g., colour palette, typography) or when specific components need unique
+ styles.
---
@@ -269,31 +284,38 @@ specific examples:
### Asset Directories
-- **Purpose**: The asset directories store static files such as images, gifs, and config files.
- Proper organisation ensures that assets are easy to find and use in the project.
+- **Purpose**: The asset directories store static files such as images, gifs,
+ and config files. Proper organisation ensures that assets are easy to find and
+ use in the project.
- **Directory Structure**:
- - **GIFs**: Store all general-purpose gifs in `public/gifs/`, regardless of content. However, if
- the gif is used as part of a usage example (such as in a tutorial or documentation), store it in
+ - **GIFs**: Store all general-purpose gifs in `public/gifs/`, regardless of
+ content. However, if the gif is used as part of a usage example (such as in
+ a tutorial or documentation), store it in
`public/usage-examples-images-gifs/`.
- - **Config Files**: Any configuration files that are generated from scripts (e.g.,
- `games-config.json`) should be placed directly in the `public/` folder. This ensures they are
- accessible for any part of the project that needs them.
+ - **Config Files**: Any configuration files that are generated from scripts
+ (e.g., `games-config.json`) should be placed directly in the `public/`
+ folder. This ensures they are accessible for any part of the project that
+ needs them.
- - **Static Images**: Other static assets, such as images associated with tutorials, should be
- stored in the same folder as the corresponding `.mdx` file. This keeps the related assets
- organised with the content they belong to.
+ - **Static Images**: Other static assets, such as images associated with
+ tutorials, should be stored in the same folder as the corresponding `.mdx`
+ file. This keeps the related assets organised with the content they belong
+ to.
- **Example**:
- - A gif demonstrating a tutorial should be placed in `public/usage-examples-images-gifs/`.
- - An image used in a physics tutorial stored in `src/content/docs/guides/physics/images` should
- sit alongside the tutorial file in that same folder.
+ - A gif demonstrating a tutorial should be placed in
+ `public/usage-examples-images-gifs/`.
+ - An image used in a physics tutorial stored in
+ `src/content/docs/guides/physics/images` should sit alongside the tutorial
+ file in that same folder.
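Put together, the conventions above give a layout roughly like this (file names are illustrative):

```plaintext
public/
├── games-config.json               # script-generated config files
├── gifs/                           # general-purpose gifs
└── usage-examples-images-gifs/     # gifs used in tutorials and usage examples
src/content/docs/guides/physics/
├── physics-tutorial.mdx
└── images/                         # static images stored beside the tutorial
```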
---
## `.mdx` Files
-- **Purpose**: Markdown files with embedded components for creating content pages.
+- **Purpose**: Markdown files with embedded components for creating content
+ pages.
- **What to update**:
- Add new content in `src/content/docs/`.
@@ -308,25 +330,29 @@ When performing peer reviews, here’s what you should check for in various file
### `.mdx` Files
- **Content Accuracy**: Ensure the information is correct and well-organised.
-- **Frontmatter**: Check that the frontmatter is complete (e.g., `title`, `description`).
-- **Component Usage**: Ensure embedded components are used properly (e.g., `LinkCard` or
- `CardGrid`).
+- **Frontmatter**: Check that the frontmatter is complete (e.g., `title`,
+ `description`).
+- **Component Usage**: Ensure embedded components are used properly (e.g.,
+ `LinkCard` or `CardGrid`).
### `.css` Files
-- **Consistency**: Ensure that styles follow the project’s styling conventions (e.g., consistent use
- of variables for colours, fonts, and spacing).
+- **Consistency**: Ensure that styles follow the project’s styling conventions
+ (e.g., consistent use of variables for colours, fonts, and spacing).
- **Naming Conventions**: Ensure class names follow a consistent naming pattern.
### `.jsx`/`.tsx` Files
- **Functionality**: Ensure that the component works as expected.
-- **Performance**: Look for unnecessary re-renders or inefficiencies in React component updates.
+- **Performance**: Look for unnecessary re-renders or inefficiencies in React
+ component updates.
- **Code Style**: Ensure it follows SplashKit's linting rules.
### `.astro` Files
-- **Structure**: Ensure the page/component is well-structured and follows best practices.
-- **Reusability**: Consider whether code could be refactored into reusable components.
+- **Structure**: Ensure the page/component is well-structured and follows best
+ practices.
+- **Reusability**: Consider whether code could be refactored into reusable
+ components.
---
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/03-css-styling-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/03-css-styling-guide.mdx
index ab9c1df24..dc134b30a 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/03-css-styling-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/03-css-styling-guide.mdx
@@ -4,8 +4,8 @@ sidebar:
label: CSS Styling Guide
---
-This document outlines the core styling principles for the SplashKit website to ensure consistency,
-accessibility, and a cohesive design.
+This document outlines the core styling principles for the SplashKit website to
+ensure consistency, accessibility, and a cohesive design.
## Contents
@@ -33,15 +33,16 @@ accessibility, and a cohesive design.
## Colour Palette
-- **Primary Colours**: SplashKit uses a consistent colour palette to maintain a clean, cohesive
- design. You are encouraged to use accessible, high-contrast colours that support both branding and
- readability.
+- **Primary Colours**: SplashKit uses a consistent colour palette to maintain a
+ clean, cohesive design. You are encouraged to use accessible, high-contrast
+ colours that support both branding and readability.
### Tools for Colour Palette Generation
-- **Coolors.co**: Use [Coolors.co](https://coolors.co/) to generate and experiment with colour
- palettes. This platform supports multiple colour spaces and includes contrast checkers, ensuring
- your colour selections meet accessibility standards.
+- **Coolors.co**: Use [Coolors.co](https://coolors.co/) to generate and
+ experiment with colour palettes. This platform supports multiple colour spaces
+ and includes contrast checkers, ensuring your colour selections meet
+ accessibility standards.
**Example**:
@@ -54,29 +55,32 @@ accessibility, and a cohesive design.
}
```
-- Adjust these values according to the design requirements, ensuring they meet accessibility
- standards like **WCAG 2.1 AA**.
+- Adjust these values according to the design requirements, ensuring they meet
+ accessibility standards like **WCAG 2.1 AA**.
---
## Accessibility Considerations
-Accessibility is a key concern, and we need to ensure that our design is inclusive and easy to
-navigate for all users, including those with visual impairments.
+Accessibility is a key concern, and we need to ensure that our design is
+inclusive and easy to navigate for all users, including those with visual
+impairments.
### 1. **WCAG 2.1 AA Compliance**
-- When designing for accessibility, ensure that colour contrast meets the **Web Content
- Accessibility Guidelines (WCAG) 2.1 AA** standards. These include:
- - A contrast ratio of at least **4.5:1** between text and background for normal text.
+- When designing for accessibility, ensure that colour contrast meets the **Web
+ Content Accessibility Guidelines (WCAG) 2.1 AA** standards. These include:
+ - A contrast ratio of at least **4.5:1** between text and background for
+ normal text.
- A contrast ratio of at least **3:1** for larger text (18pt and above).
-- Use tools like the **Color Theme Editor**, **Coolors.co contrast checker**, and the **Pilestone
- Color Blindness Simulator** to verify your designs are accessible for all users.
+- Use tools like the **Color Theme Editor**, **Coolors.co contrast checker**,
+ and the **Pilestone Color Blindness Simulator** to verify your designs are
+ accessible for all users.
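The contrast thresholds above are easy to verify programmatically. Below is a minimal sketch of the WCAG 2.1 relative-luminance and contrast-ratio formulas; the function names are illustrative, not part of any SplashKit tooling:

```js
// Sketch of the WCAG 2.1 contrast-ratio calculation.
// Function names are illustrative, not part of any SplashKit API.

function relativeLuminance(hex) {
  // Parse "#rrggbb" into linearised sRGB channels.
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg, bg) {
  // Ratio of the lighter luminance to the darker, each offset by 0.05.
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white gives the maximum possible ratio of 21:1,
// comfortably above the 4.5:1 minimum for normal text.
console.log(contrastRatio("#000000", "#ffffff").toFixed(2)); // "21.00"
```

A pair passes AA for normal text when `contrastRatio(fg, bg) >= 4.5`, and for large text when it is at least 3.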
### 2. **Disabling Animations and Smooth Transitions**
-- Users should have the option to disable animations and transitions to avoid distracting or
- overwhelming effects.
+- Users should have the option to disable animations and transitions to avoid
+ distracting or overwhelming effects.
**CSS Example**:
@@ -94,9 +98,9 @@ navigate for all users, including those with visual impairments.
### 3. **Hover Effects and Visual Cues**
-- Ensure that elements that change on hover, such as buttons or links, are clearly identifiable as
- interactive elements. A subtle colour change on hover is a good way to indicate that the element
- is clickable.
+- Ensure that elements that change on hover, such as buttons or links, are
+ clearly identifiable as interactive elements. A subtle colour change on hover
+ is a good way to indicate that the element is clickable.
**Example**:
@@ -113,12 +117,13 @@ navigate for all users, including those with visual impairments.
### 4. **Colour Blindness Accessibility**
- Use tools like the
- [Pilestone Color Blindness Simulator](https://pilestone.com/pages/color-blindness-simulator-1) to
- test how different users experience colours. This tool is particularly useful for ensuring that
- users with various forms of colour blindness can comfortably navigate the site.
+ [Pilestone Color Blindness Simulator](https://pilestone.com/pages/color-blindness-simulator-1)
+ to test how different users experience colours. This tool is particularly
+ useful for ensuring that users with various forms of colour blindness can
+ comfortably navigate the site.
- Ensure that contrast ratios are sufficient to make text legible for all users, particularly those
- with colour vision deficiencies.
+- Ensure that contrast ratios are sufficient to make text legible for all users,
+ particularly those with colour vision deficiencies.
---
@@ -128,7 +133,8 @@ navigate for all users, including those with visual impairments.
Use a modern, readable font to maintain consistency across the website.
-- Default fonts like `Arial`, `Roboto`, or `Gaegu` can be used for headers and body text.
+- Default fonts like `Arial`, `Roboto`, or `Gaegu` can be used for headers and
+ body text.
- Follow the existing import patterns for fonts:
```css
@@ -153,7 +159,8 @@ Use a modern, readable font to maintain consistency across the website.
}
```
-If needed, you can adjust these values in `custom.css` to fit specific design elements.
+If needed, you can adjust these values in `custom.css` to fit specific design
+elements.
---
@@ -161,7 +168,8 @@ If needed, you can adjust these values in `custom.css` to fit specific design el
### Buttons
-Buttons should use consistent styles across the website to ensure a cohesive look.
+Buttons should use consistent styles across the website to ensure a cohesive
+look.
- **Primary Button**: **Example**:
@@ -180,19 +188,20 @@ Buttons should use consistent styles across the website to ensure a cohesive loo
}
```
-If you're contributing, please review the existing styles in `custom.css` for similar elements and
-ensure your additions follow the same structure.
+If you're contributing, please review the existing styles in `custom.css` for
+similar elements and ensure your additions follow the same structure.
---
## Image and Asset Management
-- **GIFs**: General-purpose gifs should be stored in `public/gifs/`. If a gif is used in a usage
- example (tutorials, documentation), store it in `public/usage-examples-images-gifs/`.
-- **Static Images**: Images associated with tutorials or content should be stored alongside the
- `.mdx` files they relate to.
-- **Config Files**: Any configuration files generated from scripts (e.g., `games-config.json`)
- should be placed in the `public/` folder for easy access.
+- **GIFs**: General-purpose gifs should be stored in `public/gifs/`. If a gif is
+ used in a usage example (tutorials, documentation), store it in
+ `public/usage-examples-images-gifs/`.
+- **Static Images**: Images associated with tutorials or content should be
+ stored alongside the `.mdx` files they relate to.
+- **Config Files**: Any configuration files generated from scripts (e.g.,
+ `games-config.json`) should be placed in the `public/` folder for easy access.
---
@@ -200,24 +209,27 @@ ensure your additions follow the same structure.
The
[Starlight Color Theme Editor](https://starlight.astro.build/guides/css-and-tailwind/#:~:text=items%20in%20navigation.-,Color%20theme%20editor,-Use%20the%20sliders)
-is an excellent tool for previewing and adjusting colour values for various design elements across
-the website. It allows you to tweak background, text, and accent colours in real-time, providing
-instant feedback on how those changes will affect the overall design.
+is an excellent tool for previewing and adjusting colour values for various
+design elements across the website. It allows you to tweak background, text, and
+accent colours in real-time, providing instant feedback on how those changes
+will affect the overall design.
### How to Use
-- Use the sliders in the editor to adjust different colour variables, which are immediately
- reflected in the preview.
-- Once you're satisfied with your selections, you can copy the colour values and apply them to the
- `custom.css` or directly into your `.astro` components.
-- This tool is particularly helpful when used in conjunction with **Coolors.co** for creating
- palettes or the **Pilestone Color Blindness Simulator** to ensure accessibility.
+- Use the sliders in the editor to adjust different colour variables, which are
+ immediately reflected in the preview.
+- Once you're satisfied with your selections, you can copy the colour values and
+ apply them to the `custom.css` or directly into your `.astro` components.
+- This tool is particularly helpful when used in conjunction with **Coolors.co**
+ for creating palettes or the **Pilestone Color Blindness Simulator** to ensure
+ accessibility.
---
## Current Styling Values
-Here are the current values being used in the `custom.css` file for the SplashKit website:
+Here are the current values being used in the `custom.css` file for the
+SplashKit website:
```css
@import url("https://use.typekit.net/vzr3ole.css");
@@ -244,12 +256,13 @@ Please review these values when making any changes to ensure consistency.
## Useful Tools
-- **Coolors.co**: Use this platform to experiment with colour palettes and ensure your choices are
- accessible.
-- **Pilestone Color Blindness Simulator**: Test colour choices for accessibility to users with
- colour vision deficiencies.
-- **WCAG 2.1 AA Standards**: Use these standards as the baseline to ensure that the site is
- accessible to all users, particularly with respect to contrast ratios between text and background.
+- **Coolors.co**: Use this platform to experiment with colour palettes and
+ ensure your choices are accessible.
+- **Pilestone Color Blindness Simulator**: Test colour choices for accessibility
+ to users with colour vision deficiencies.
+- **WCAG 2.1 AA Standards**: Use these standards as the baseline to ensure that
+ the site is accessible to all users, particularly with respect to contrast
+ ratios between text and background.
---
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/04-search-guide.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/04-search-guide.mdx
index 83a4bbd63..1198b6164 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/04-search-guide.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/04-search-guide.mdx
@@ -6,44 +6,53 @@ sidebar:
import { Tabs, TabItem, Steps, Aside } from "@astrojs/starlight/components";
-Algolia DocSearch is a search engine designed to automatically extract content from Open Source
-documentations, allowing it to have an instant search on tech websites.
+Algolia DocSearch is a search engine designed to automatically extract content
+from open-source documentation, providing instant search on technical
+websites.
## Why DocSearch
- It is free and was designed especially for open-source projects and technical
documentation/blogs.
- It already has built-in support for Astro Starlight.
-- It is built on Algolia Autocomplete, providing better accessiblity and customizability.
-- It offers new features to better the user experience such as search history log, a favorite
- system, and support for Google Analytics Integration.
+- It is built on Algolia Autocomplete, providing better accessibility and
+ customizability.
+- It offers features that improve the user experience, such as a search history
+ log, a favorite system, and support for Google Analytics integration.
### How to Get Started
-Registration can be done via the [DocSearch Site](https://docsearch.algolia.com/apply) where you'll
-need to enter your website URL and email. _(This process could take a few days for approval.)_ Once
-approved, you will recieve an email to accept their invitation to get started, where you'll be taken
-to the Algolia Dashboard. Further configuration can be made via this Dashboard.
+Registration can be done via the
+[DocSearch Site](https://docsearch.algolia.com/apply) where you'll need to enter
+your website URL and email. _(This process could take a few days for approval.)_
+Once approved, you will receive an email to accept their invitation to get
+started, where you'll be taken to the Algolia Dashboard. Further configuration
+can be made via this Dashboard.
@@ -81,7 +90,8 @@ By specifying the `.first()` function, it ensures it only returns the main categ
-2. Add DocSearch APIs to the Astro Starlight `plugins` config via `astro.config.mjs`:
+2. Add DocSearch APIs to the Astro Starlight `plugins` config via
+ `astro.config.mjs`:
```js
// astro.config.mjs
@@ -112,7 +122,10 @@ By specifying the `.first()` function, it ensures it only returns the main categ
```
3. **Optional** Further configuration can be done by following guides in the
   Starlight Documentation.
@@ -121,8 +134,8 @@ By specifying the `.first()` function, it ensures it only returns the main categ
## Custom Ranking
-This following section provides a step-by-step guide to updating the custom ranking system to
-prioritize categories during search operations.
+The following section provides a step-by-step guide to updating the custom
+ranking system to prioritize categories during search operations.
### Steps
@@ -133,9 +146,11 @@ prioritize categories during search operations.
Algolia Dashboard.
-2. Click on the **Go To Crawler** button and select the crawler you want to adjust.
+2. Click on the **Go To Crawler** button and select the crawler you want to
+ adjust.
3. Once in the crawler, go to the **Editor** via the sidebar.
-4. Add DocSearch APIs to the Astro Starlight `plugins` config via `astro.config.mjs`:
+4. Add DocSearch APIs to the Astro Starlight `plugins` config via
+ `astro.config.mjs`:
```js
actions: [
@@ -144,8 +159,11 @@ prioritize categories during search operations.
pathsToMatch: ["https://sksearchtest.netlify.app/**"],
recordExtractor: ({ $, helpers }) => {
const lvl0 =
- $('details:has(a[aria-current="page"])').find("summary").find("span").first().text() ||
- "Documentation";
+ $('details:has(a[aria-current="page"])')
+ .find("summary")
+ .find("span")
+ .first()
+ .text() || "Documentation";
return helpers.docsearch({
recordProps: {
@@ -171,22 +189,25 @@ prioritize categories during search operations.
];
```
-5. Inside the `recordProps`, you'll notice two variables: `apiBoost` and `usageBoost`. _These
- variable names can be customized to suit your needs_.
-6. The values assigned to these variables represent the boost level. The higher the number, the
- stronger the boost.
-7. To change which category receives a boost, simply adjust the name that the condition checks. For
- example, you can modify `"Developer Documentation"` to check for a different category name, such
- as `"Installation"` or `"Tutorials and Guides"`.
-8. Once completed, navigate to the **Indices** tab viable the sidebar and select the index you're
- working on.
-9. In the Index page, head to the **Configuration** tab in the top navigation bar and select
- **Ranking and Sorting**: 
-
-10. As seen in the image above, place the variables defined in the Editor earlier by clickin the **+
- Add custom ranking attribute** button. The order of priority should follow a top-to-bottom
- sequence, with the top-most being the highest priority. _Note: It is not recommended to place
- these variables above the preset textual ranking, as the preset rankings serve as the core
+5. Inside the `recordProps`, you'll notice two variables: `apiBoost` and
+ `usageBoost`. _These variable names can be customized to suit your needs_.
+6. The values assigned to these variables represent the boost level. The higher
+ the number, the stronger the boost.
+7. To change which category receives a boost, simply adjust the name that the
+ condition checks. For example, you can modify `"Developer Documentation"` to
+ check for a different category name, such as `"Installation"` or
+ `"Tutorials and Guides"`.
+8. Once completed, navigate to the **Indices** tab via the sidebar and select
+ the index you're working on.
+9. In the Index page, head to the **Configuration** tab in the top navigation
+ bar and select **Ranking and Sorting**:
+ 
+
+10. As seen in the image above, place the variables defined in the Editor
+ earlier by clicking the **+ Add custom ranking attribute** button. The order
+ of priority should follow a top-to-bottom sequence, with the top-most being
+ the highest priority. _Note: It is not recommended to place these variables
+ above the preset textual ranking, as the preset rankings serve as the core
foundation of the search engine’s ranking system._
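The boost logic in steps 5–7 amounts to a simple conditional per record. A sketch follows; the category names and boost values here are illustrative examples, not the project's actual configuration:

```js
// Sketch of the custom-ranking boost described in steps 5–7.
// Category names and boost values are illustrative.
function rankingBoosts(lvl0) {
  return {
    // Higher numbers mean a stronger boost (step 6).
    apiBoost: lvl0 === "Developer Documentation" ? 100 : 0,
    usageBoost: lvl0 === "Tutorials and Guides" ? 50 : 0,
  };
}

console.log(rankingBoosts("Developer Documentation")); // { apiBoost: 100, usageBoost: 0 }
```

Changing which category is boosted (step 7) is just a matter of editing the string each condition compares against.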
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/05-peer-review-web.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/05-peer-review-web.mdx
index 363f63e93..e1ba5b369 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/05-peer-review-web.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/05-peer-review-web.mdx
@@ -10,32 +10,38 @@ import { Aside } from "@astrojs/starlight/components";
-In SplashKit, peer reviews are a vital process to ensure code quality, maintainability, and
-consistency across the website development project. Every pull request (PR) must follow the
-Peer-Review Checklist, which checks for key factors like functionality, code readability, and
-documentation.
+In SplashKit, peer reviews are a vital process to ensure code quality,
+maintainability, and consistency across the website development project. Every
+pull request (PR) must follow the Peer-Review Checklist, which checks for key
+factors like functionality, code readability, and documentation.
-Additionally, the Peer-Review Prompts serve as a conversation starter for reviewers, encouraging
-collaboration while allowing for a thorough and constructive review process.
+Additionally, the Peer-Review Prompts serve as a conversation starter for
+reviewers, encouraging collaboration while allowing for a thorough and
+constructive review process.
### SplashKit Peer-Review Checklist
-The following checklist is required to be completed for every review to ensure high-quality
-contributions.
+The following checklist is required to be completed for every review to ensure
+high-quality contributions.
```plaintext
## General Information
@@ -74,81 +80,93 @@ contributions.
### SplashKit Peer-Review Prompts
-These prompts can help guide discussions during the review process and ensure that the code meets
-high standards.
-
-- **Type of Change**: Is the PR correctly identifying the type of change (bug fix, new feature,
- etc.)?
-- **Code Readability**: Is the code well-structured and easy to follow? Could better comments,
- names, or organisation improve it?
-- **Maintainability**: Is the code modular and easy to maintain? Does it introduce any technical
- debt?
-- **Code Simplicity**: Are there redundant or overly complex parts of the code that could be
- simplified?
-- **Edge Cases**: Does the code account for edge cases? What scenarios might cause it to break?
-- **Test Thoroughness**: Does the testing cover all edge cases and failure paths? Are there enough
- tests to ensure code reliability?
-- **Backward Compatibility**: Does the change break any existing functionality? If so, is backward
- compatibility handled or documented?
-- **Performance Considerations**: Could this code impact performance negatively? Can it be optimised
- while maintaining readability?
-- **Security Concerns**: Does this change introduce any security risks? Is input validation handled
- properly?
-- **Dependencies**: Are new dependencies necessary? Could they conflict with existing libraries?
- Could this functionality be achieved without new dependencies?
-- **Documentation**: Is the documentation clear and thorough enough for new developers to
- understand? Does it cover API or external interface changes?
+These prompts can help guide discussions during the review process and ensure
+that the code meets high standards.
+
+- **Type of Change**: Is the PR correctly identifying the type of change (bug
+ fix, new feature, etc.)?
+- **Code Readability**: Is the code well-structured and easy to follow? Could
+ better comments, names, or organisation improve it?
+- **Maintainability**: Is the code modular and easy to maintain? Does it
+ introduce any technical debt?
+- **Code Simplicity**: Are there redundant or overly complex parts of the code
+ that could be simplified?
+- **Edge Cases**: Does the code account for edge cases? What scenarios might
+ cause it to break?
+- **Test Thoroughness**: Does the testing cover all edge cases and failure
+ paths? Are there enough tests to ensure code reliability?
+- **Backward Compatibility**: Does the change break any existing functionality?
+ If so, is backward compatibility handled or documented?
+- **Performance Considerations**: Could this code impact performance negatively?
+ Can it be optimised while maintaining readability?
+- **Security Concerns**: Does this change introduce any security risks? Is input
+ validation handled properly?
+- **Dependencies**: Are new dependencies necessary? Could they conflict with
+ existing libraries? Could this functionality be achieved without new
+ dependencies?
+- **Documentation**: Is the documentation clear and thorough enough for new
+ developers to understand? Does it cover API or external interface changes?
---
## Review Guidelines for Specific File Types
-Different file types require different levels of attention during the review process. Here's what to
-look for when reviewing each type of file:
+Different file types require different levels of attention during the review
+process. Here's what to look for when reviewing each type of file:
### `.mdx` Files
-- **Content Accuracy**: Ensure that the content is clear and accurate. Double-check for any errors
- in the documentation or guides.
-- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is correctly filled out.
-- **Component Usage**: Verify that components such as `LinkCard`, `CardGrid`, or others are being
- used appropriately within the `.mdx` files.
+- **Content Accuracy**: Ensure that the content is clear and accurate.
+ Double-check for any errors in the documentation or guides.
+- **Frontmatter**: Ensure the frontmatter (`title`, `description`, etc.) is
+ correctly filled out.
+- **Component Usage**: Verify that components such as `LinkCard`, `CardGrid`, or
+ others are being used appropriately within the `.mdx` files.
### `.css` Files
-- **Consistency**: Check that the styles align with the **Styling Guide** and maintain a consistent
- use of variables (e.g., colours, fonts, spacing).
-- **Accessibility**: Review for accessibility considerations, such as whether animations are
- disabled for users who prefer reduced motion, and whether contrast ratios meet **WCAG 2.1 AA**
- standards.
-- **Naming Conventions**: Ensure that CSS class names follow a consistent naming pattern.
+- **Consistency**: Check that the styles align with the **Styling Guide** and
+ maintain a consistent use of variables (e.g., colours, fonts, spacing).
+- **Accessibility**: Review for accessibility considerations, such as whether
+ animations are disabled for users who prefer reduced motion, and whether
+ contrast ratios meet **WCAG 2.1 AA** standards.
+- **Naming Conventions**: Ensure that CSS class names follow a consistent naming
+ pattern.
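As a concrete illustration of the reduced-motion check, a stylesheet can gate animations behind the standard media query. This is a generic sketch, not taken from the project's actual styles:

```css
/* Respect the user's OS-level "reduce motion" preference.
   Hypothetical blanket rule for illustration only. */
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}
```

When reviewing, check whether decorative animations are wrapped in a query like this, or whether the reduced-motion case is otherwise handled.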
### `.jsx`/`.tsx` Files
-- **Functionality**: Make sure the interactive components (e.g., sliders, forms) work as expected
- and meet the requirements of the task.
-- **Performance**: Look for unnecessary re-renders or other performance concerns.
-- **Code Style**: Ensure the code follows **React/JSX** best practices and any project-specific
- linting rules.
+- **Functionality**: Make sure the interactive components (e.g., sliders, forms)
+ work as expected and meet the requirements of the task.
+- **Performance**: Look for unnecessary re-renders or other performance
+ concerns.
+- **Code Style**: Ensure the code follows **React/JSX** best practices and any
+ project-specific linting rules.
### `.astro` Files
-- **Structure**: Ensure the page or component is well-structured and follows the **Astro standards**
- for component and page creation.
-- **Reusability**: Look for opportunities to refactor repetitive code into reusable components.
+- **Structure**: Ensure the page or component is well-structured and follows the
+ **Astro standards** for component and page creation.
+- **Reusability**: Look for opportunities to refactor repetitive code into
+ reusable components.
---
## Useful Resources for Reviewers
-- **Starlight Documentation**: [Starlight Docs](https://starlight.astro.build/getting-started/)
-- **Astro Documentation**: [Astro Docs](https://docs.astro.build/en/getting-started/)
-- **WCAG 2.1 AA Guidelines**: [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
-- **MDN CSS Documentation**: [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
-- **React Documentation**: [React Official Docs](https://reactjs.org/docs/getting-started.html)
+- **Starlight Documentation**:
+ [Starlight Docs](https://starlight.astro.build/getting-started/)
+- **Astro Documentation**:
+ [Astro Docs](https://docs.astro.build/en/getting-started/)
+- **WCAG 2.1 AA Guidelines**:
+ [W3C Accessibility Standards](https://www.w3.org/WAI/WCAG21/quickref/)
+- **MDN CSS Documentation**:
+ [MDN CSS Guide](https://developer.mozilla.org/en-US/docs/Web/CSS)
+- **React Documentation**:
+ [React Official Docs](https://reactjs.org/docs/getting-started.html)
---
-By following these guidelines, you'll ensure that the SplashKit website project maintains high
-standards of code quality, performance, and accessibility. Remember, peer reviews are not only about
-verifying the code but also about learning and improving together as a team.
+By following these guidelines, you'll ensure that the SplashKit website project
+maintains high standards of code quality, performance, and accessibility.
+Remember, peer reviews are not only about verifying the code but also about
+learning and improving together as a team.
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase-template.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase-template.mdx
index 9380173b6..973e7250d 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase-template.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase-template.mdx
@@ -52,7 +52,8 @@ An example may look like this:
_Briefly describe the gameplay of the game. Include any controls._
-{/* Delete this tip from the final games showcase file */} :::tip An example may look like this:
+{/* Delete this tip from the final games showcase file */} :::tip An example may
+look like this:
```markdown
| Action | Player 1 |
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase.mdx
index 4e20e3acf..d10880904 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/Website Documentation/games-showcase.mdx
@@ -7,8 +7,9 @@ description: Guide for adding to the Games Showcase on SplashKit.io
The
[Games Showcase Template](https://github.com/thoth-tech/ThothTech-Documentation-Website/blob/main/src/content/docs/Products/SplashKit/Splashkit%20Website/Website%20Documentation/games-showcase-template.mdx)
-is to be used when adding a game to the Games Showcase on the SplashKit.io website. This will be
-included in the /src/content/docs/games folder of the SplashKit.io folder.
+is to be used when adding a game to the Games Showcase on the SplashKit.io
+website. The file will be included in the /src/content/docs/games folder of
+the SplashKit.io project.
#### Frontmatter
@@ -24,21 +25,24 @@ sidebar:
```
- **title**: Game Title
-- **description**: Short description of game here. Keep this short as it will generate the thumbnail
- in the index page.
-- **download-link**: The link to the download-directory.github.io tool to download the github
- repository that holds the game. Simply copy the link to the game parent game folder and add it to
- the end of `https://download-directory.github.io?url=`. The download-link needs to be encapsulated
- in double quotes`""`.
-- **featured**: This flag is to determine if the game is shown on the Games Index page. The Games
- Index page will automatically populate based on the frontmatter.
-- **sidebar**: Set the hidden flag to determine if the game will show up on the sidebar.
+- **description**: A short description of the game. Keep this brief, as it is
+  used to generate the thumbnail on the index page.
+- **download-link**: The link to the download-directory.github.io tool used to
+  download the GitHub repository that holds the game. Copy the link to the
+  parent game folder and append it to the end of
+  `https://download-directory.github.io?url=`. The download-link value needs
+  to be wrapped in double quotes `""`.
+- **featured**: This flag determines whether the game is shown on the Games
+  Index page. The Games Index page will automatically populate based on the
+  frontmatter.
+- **sidebar**: Set the hidden flag to control whether the game appears in the
+  sidebar.
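The download-link construction described above can be sketched in a few lines. The repository folder URL below is a hypothetical example, not a real game entry:

```javascript
// Sketch of building the download-link frontmatter value described above.
// The repository folder URL is hypothetical; substitute the real game folder.
const repoFolderUrl =
  "https://github.com/thoth-tech/example-game/tree/main/ExampleGame";

// Append the repo folder link to the end of the download-directory.github.io
// tool URL, as the guide describes.
const downloadLink = `https://download-directory.github.io?url=${repoFolderUrl}`;

// The frontmatter expects the value wrapped in double quotes.
const frontmatterLine = `download-link: "${downloadLink}"`;

console.log(frontmatterLine);
```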
#### Game Gif
This will be automatically generated when the gif file is placed into the
-/public/gifs/games-showcase folder with the naming convention game-title-showcase.gif. Do not adjust
-this.
+/public/gifs/games-showcase folder with the naming convention
+game-title-showcase.gif. Do not adjust this.
#### Description
@@ -54,7 +58,8 @@ Include installation instructions that are language specific.
3. Execute `dotnet run` through the MSYS2 terminal.
```
-The Download button is automatically derived from the frontmatter - do not alter this
+The Download button is automatically derived from the frontmatter - do not
+alter this.
```html
@@ -96,18 +101,21 @@ Include the date that the game was last updated.
Last updated:
```
-A Back to Games Index button will be automatically generated - do not alter this code.
+A Back to Games Index button will be automatically generated - do not alter this
+code.
```html
-
+
```
## Adding Games to the Home Page Swiper
-To add a game to the front page swiper, you will need to make sure that frontmatter in the `.mdx`
-file of your game is correct.
+To add a game to the front page swiper, you will need to make sure that
+frontmatter in the `.mdx` file of your game is correct.
```
title: The name of your game
@@ -115,19 +123,19 @@ description: The description of your game
featured: This boolean will determine if the game is featured on the game swiper or not
```
-A `games-config.json` file is generated by a script (feature-games.cjs), which scans the
-src/content/docs/games directory, reads the frontmatter from each .mdx file, and writes the relevant
-data to the JSON file.
+A `games-config.json` file is generated by a script (feature-games.cjs), which
+scans the src/content/docs/games directory, reads the frontmatter from each .mdx
+file, and writes the relevant data to the JSON file.
This JSON is saved to src/components/react/GameCardSwiper/games-config.json.
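In outline, the generation step works something like the sketch below. The field names come from the frontmatter guide above; the parsing logic is a simplified stand-in, not the actual contents of feature-games.cjs:

```javascript
// Simplified sketch of what a script like feature-games.cjs does: pull
// `title`, `description`, and `featured` out of an .mdx file's frontmatter.
// The parsing here is a minimal stand-in, not the real script's code.
function extractFrontmatter(mdxSource) {
  const match = mdxSource.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const fields = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return {
    title: fields.title,
    description: fields.description,
    featured: fields.featured === "true",
  };
}

// Hypothetical input, shaped like the frontmatter described above.
const sample = `---
title: Example Game
description: A short description
featured: true
---

Game content here.`;

console.log(extractFrontmatter(sample));
```

The real script applies this extraction across every `.mdx` file in src/content/docs/games and writes the collected objects to the JSON file.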
-The Astro project runs this script automatically during development (npm run dev) and build (npm run
-build). This ensures that the games-config.json file is always up to date without manual
-intervention.
+The Astro project runs this script automatically during development (npm run
+dev) and build (npm run build). This ensures that the games-config.json file is
+always up to date without manual intervention.
-In the React Swiper component, the project dynamically fetches the games-config.json file at
-runtime.
+In the React Swiper component, the project dynamically fetches the
+games-config.json file at runtime.
-The data from games-config.json is then used to display featured games within the Swiper carousel on
-the main page. It filters the games based on whether they are marked as featured, sorts them by
-name, and renders them dynamically.
+The data from games-config.json is then used to display featured games within
+the Swiper carousel on the main page. It filters the games based on whether they
+are marked as featured, sorts them by name, and renders them dynamically.
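The filter-and-sort step described above can be sketched roughly as follows. The field names follow the frontmatter guide; the array literal is hypothetical sample data, not the project's actual games-config.json:

```javascript
// Rough sketch of the selection logic described above: keep only featured
// games, then sort them by title. A minimal stand-in for the Swiper
// component's handling of games-config.json, not the real component code.
const games = [
  { title: "Zeta Quest", featured: true },
  { title: "Alpha Run", featured: true },
  { title: "Hidden Gem", featured: false },
];

const featuredGames = games
  .filter((game) => game.featured)
  .sort((a, b) => a.title.localeCompare(b.title));

console.log(featuredGames.map((g) => g.title)); // → ["Alpha Run", "Zeta Quest"]
```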
diff --git a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/index.mdx b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/index.mdx
index 395d86426..e8f9321d4 100644
--- a/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/index.mdx
+++ b/src/content/docs/Products/SplashKit/Documentation/Splashkit Website/index.mdx
@@ -10,9 +10,10 @@ import { Card, LinkCard, CardGrid, Icon } from "@astrojs/starlight/components";
## The SplashKit.io Website Team
-The SplashKit Website Team manages all aspects of the SplashKit.io website. This includes overseeing
-the site’s design, maintaining up-to-date tutorials and usage examples, and ensuring that all
-content is accurate, relevant, and user-friendly.
+The SplashKit Website Team manages all aspects of the SplashKit.io website. This
+includes overseeing the site’s design, maintaining up-to-date tutorials and
+usage examples, and ensuring that all content is accurate, relevant, and
+user-friendly.
The team’s responsibilities include:
@@ -20,8 +21,9 @@ The team’s responsibilities include:
- Keeping tutorials, guides, and usage examples current
- Optimising the site’s usability and accessibility for a smooth user experience
-In addition to maintenance, the team actively develops new content to expand the site’s resources,
-aiming to make SplashKit.io a comprehensive and helpful platform for users.
+In addition to maintenance, the team actively develops new content to expand the
+site’s resources, aiming to make SplashKit.io a comprehensive and helpful
+platform for users.
## Onboarding Information
diff --git a/src/content/docs/Products/SplashKit/index.mdx b/src/content/docs/Products/SplashKit/index.mdx
index d55724379..6c9237e8b 100644
--- a/src/content/docs/Products/SplashKit/index.mdx
+++ b/src/content/docs/Products/SplashKit/index.mdx
@@ -8,10 +8,16 @@ tableOfContents:
maxHeadingLevel: 4
---
-import { Card, LinkCard, CardGrid, Icon, Aside } from "@astrojs/starlight/components";
+import {
+ Card,
+ LinkCard,
+ CardGrid,
+ Icon,
+ Aside,
+} from "@astrojs/starlight/components";
-Welcome to SplashKit! Further resources detailing how to work on SplashKit are provided per-project
-in the sidebar to the left.
+Welcome to SplashKit! Further resources detailing how to work on SplashKit are
+provided per-project in the sidebar to the left.
-
+
-SplashKit.io is the primary public-facing website for the SplashKit community. It contains guides,
-tutorials, and comprehensive API documentation. This platform is designed to help users, especially
-beginners, learn how to use SplashKit effectively in their projects. As a contributor, your
-responsibilities might include writing new guides, updating existing tutorials, and ensuring the API
-documentation is clear and up-to-date.
+SplashKit.io is the primary public-facing website for the SplashKit community.
+It contains guides, tutorials, and comprehensive API documentation. This
+platform is designed to help users, especially beginners, learn how to use
+SplashKit effectively in their projects. As a contributor, your responsibilities
+might include writing new guides, updating existing tutorials, and ensuring the
+API documentation is clear and up-to-date.
#### Thoth-Tech Documentation Website
@@ -75,21 +87,26 @@ documentation is clear and up-to-date.
-The Thoth-Tech Documentation Website is an internal platform focused on providing detailed
-explanations of features and guides targeted at the Thoth-Tech team. It serves as a valuable
-resource for team members to reference when contributing to SplashKit or collaborating on projects.
-Contributors may need to update or write internal-facing documentation, particularly as new features
-or internal tools are developed.
+The Thoth-Tech Documentation Website is an internal platform focused on
+providing detailed explanations of features and guides targeted at the
+Thoth-Tech team. It serves as a valuable resource for team members to reference
+when contributing to SplashKit or collaborating on projects. Contributors may
+need to update or write internal-facing documentation, particularly as new
+features or internal tools are developed.
#### Documentation
-
+
-- Miscellaneous documentation for research, uncompleted work, or non-public reports.
-- Main Responsibility: Contributing to documents that do not fit within the other repositories.
-- Target Audience: Internal team members needing access to research reports, development notes, and
- incomplete work.
+- Miscellaneous documentation for research, uncompleted work, or non-public
+ reports.
+- Main Responsibility: Contributing to documents that do not fit within the
+ other repositories.
+- Target Audience: Internal team members needing access to research reports,
+ development notes, and incomplete work.
-The documentation repository is a 'miscellaneous' collection where all documents that do not fit
-into other repositories are stored. This might include research reports, documentation on incomplete
-or experimental features, or other internal records. Contributors might add new documentation here
-or help organize and maintain existing records to ensure they are accessible and up-to-date.
+The documentation repository is a 'miscellaneous' collection where all documents
+that do not fit into other repositories are stored. This might include research
+reports, documentation on incomplete or experimental features, or other internal
+records. Contributors might add new documentation here or help organize and
+maintain existing records to ensure they are accessible and up-to-date.
### SplashKit Development
#### SplashKit Core
-
+
-Contributing to SplashKit Core means working directly on the foundation of the SDK. This involves
-core functionalities like rendering, audio management, input handling, and more. As a team member,
-you’ll focus on adding new features, fixing bugs, and ensuring the overall performance of the SDK
-remains high.
+Contributing to SplashKit Core means working directly on the foundation of the
+SDK. This involves core functionalities like rendering, audio management, input
+handling, and more. As a team member, you’ll focus on adding new features,
+fixing bugs, and ensuring the overall performance of the SDK remains high.
-While the core is primarily built in C++, you’ll be contributing to a codebase that supports
-translation into other languages like C#, Python, and Pascal. Although the translation process is
-largely automated, maintaining the quality and performance of the core features will be your main
-responsibility. Cross-platform development is central to SplashKit, so you'll be handling different
-OS nuances as you work on features that need to run smoothly on Windows, macOS, and Linux.
+While the core is primarily built in C++, you’ll be contributing to a codebase
+that supports translation into other languages like C#, Python, and Pascal.
+Although the translation process is largely automated, maintaining the quality
+and performance of the core features will be your main responsibility.
+Cross-platform development is central to SplashKit, so you'll be handling
+different OS nuances as you work on features that need to run smoothly on
+Windows, macOS, and Linux.
#### SplashKit Manager (SKM)
@@ -151,81 +176,101 @@ OS nuances as you work on features that need to run smoothly on Windows, macOS,
-The SplashKit Manager (SKM) is a command-line interface (CLI) and application tool designed to
-simplify the development workflow with SplashKit. It automates tasks such as project setup,
-dependency management, compilation, and running code across different platforms. Contributions to
-SKM will primary be to ensure it stays aligned with updates and new features in SplashKit Core. For
-example, you’ll ensure that new dependencies or libraries introduced in the core SDK are correctly
-included and managed across all supported platforms.
-
-Beyond core synchronisation, there may also be opportunities to improve the overall user experience
-by streamlining project initialization, refining dependency management, or adding tools that make
-the workflow more efficient for developers. However, your primary focus will be ensuring SKM
-seamlessly integrates with the evolving features and requirements of the core SplashKit SDK.
+The SplashKit Manager (SKM) is a command-line interface (CLI) and application
+tool designed to simplify the development workflow with SplashKit. It automates
+tasks such as project setup, dependency management, compilation, and running
+code across different platforms. Contributions to SKM will primarily be to
+ensure it stays aligned with updates and new features in SplashKit Core. For
+example, you’ll ensure that new dependencies or libraries introduced in the
+core SDK are correctly included and managed across all supported platforms.
+
+Beyond core synchronisation, there may also be opportunities to improve the
+overall user experience by streamlining project initialization, refining
+dependency management, or adding tools that make the workflow more efficient for
+developers. However, your primary focus will be ensuring SKM seamlessly
+integrates with the evolving features and requirements of the core SplashKit
+SDK.
#### SplashKit Translator
-
+
-The SplashKit Translator automates the process of translating SplashKit’s core C++ functionality
-into other supported languages like C#, Python, and Pascal. Since this process is largely automated
-using ERB templating, there’s minimal need for direct contributions here.
+The SplashKit Translator automates the process of translating SplashKit’s core
+C++ functionality into other supported languages like C#, Python, and Pascal.
+Since this process is largely automated using ERB templating, there’s minimal
+need for direct contributions here.
-Your focus will remain on maintaining and enhancing the core SDK, as changes there will naturally
-propagate through the automated translation process. Ensuring that core features are implemented
-cleanly and efficiently in C++ will help maintain consistency across languages.
+Your focus will remain on maintaining and enhancing the core SDK, as changes
+there will naturally propagate through the automated translation process.
+Ensuring that core features are implemented cleanly and efficiently in C++ will
+help maintain consistency across languages.
#### SplashKit Online
-
+
-SplashKit Online is a web-based IDE designed to help beginner programmers quickly start building 2D
-games directly in the browser. While it currently supports JavaScript (with experimental C++
-functionality) and leverages WebAssembly (Wasm) to execute SplashKit code, the goal is to expand
-this support to include all languages that SplashKit supports: C#, Python, and Pascal.
-
-As a contributor to SplashKit Online, your primary responsibility will be developing and integrating
-full support for these languages, allowing users to write and run code in C#, Python, and Pascal
-seamlessly within the browser-based environment. This will involve extending the IDE's functionality
-to handle language-specific nuances and ensuring that WebAssembly can execute code from these
-languages efficiently.
-
-Your work will also include improving the user experience, making the platform more intuitive and
-accessible for users, and ensuring that the transition between languages is smooth. This could
-involve building better language selection interfaces, optimizing performance for different
-languages, and adding language-specific tools or debugging features.
+SplashKit Online is a web-based IDE designed to help beginner programmers
+quickly start building 2D games directly in the browser. While it currently
+supports JavaScript (with experimental C++ functionality) and leverages
+WebAssembly (Wasm) to execute SplashKit code, the goal is to expand this support
+to include all languages that SplashKit supports: C#, Python, and Pascal.
+
+As a contributor to SplashKit Online, your primary responsibility will be
+developing and integrating full support for these languages, allowing users to
+write and run code in C#, Python, and Pascal seamlessly within the browser-based
+environment. This will involve extending the IDE's functionality to handle
+language-specific nuances and ensuring that WebAssembly can execute code from
+these languages efficiently.
+
+Your work will also include improving the user experience, making the platform
+more intuitive and accessible for users, and ensuring that the transition
+between languages is smooth. This could involve building better language
+selection interfaces, optimizing performance for different languages, and adding
+language-specific tools or debugging features.
#### Arcade Machines
-
+
-The Arcade Machines on Deakin Campuses use emulationstation, retropie, and a custom SplashKit
-application to run games built using the SplashKit SDK. These machines offer students the
-opportunity to upload their games and test them in a real-world arcade environment. The machines are
-designed to help students see their creations in action on physical hardware, making for a hands-on
-experience that bridges the gap between development and arcade-style game deployment.
+The Arcade Machines on Deakin Campuses use EmulationStation, RetroPie, and a
+custom SplashKit application to run games built using the SplashKit SDK. These
+machines offer students the opportunity to upload their games and test them in a
+real-world arcade environment. The machines are designed to help students see
+their creations in action on physical hardware, making for a hands-on experience
+that bridges the gap between development and arcade-style game deployment.
#### Game Development
-
-
+
+
-The Game Development team is a small, focused group that produces games designed to highlight the
-capabilities of SplashKit. These games follow industry-standard design patterns and practices to
-ensure they are polished and well-structured. The goal is to showcase what can be achieved using
-SplashKit while maintaining professional standards in game design and development. These projects
-serve as both a demonstration of SplashKit’s features and an inspiration for developers using the
-SDK to build their own games.
+The Game Development team is a small, focused group that produces games designed
+to highlight the capabilities of SplashKit. These games follow industry-standard
+design patterns and practices to ensure they are polished and well-structured.
+The goal is to showcase what can be achieved using SplashKit while maintaining
+professional standards in game design and development. These projects serve as
+both a demonstration of SplashKit’s features and an inspiration for developers
+using the SDK to build their own games.
diff --git a/src/content/docs/Resources/Onboarding Hub/ontrack.mdx b/src/content/docs/Resources/Onboarding Hub/ontrack.mdx
index 917a2122f..def8d55de 100644
--- a/src/content/docs/Resources/Onboarding Hub/ontrack.mdx
+++ b/src/content/docs/Resources/Onboarding Hub/ontrack.mdx
@@ -8,40 +8,46 @@ import { Steps, LinkCard, CardGrid } from "@astrojs/starlight/components";
## Contributing to OnTrack
-Contributing to OnTrack is a great way to enhance a student-focused learning and feedback platform
-while gaining experience in development workflows. Whether you’re interested in adding new features,
-fixing bugs, improving documentation, or optimizing user experience, we welcome all contributions!
+Contributing to OnTrack is a great way to enhance a student-focused learning and
+feedback platform while gaining experience in development workflows. Whether
+you’re interested in adding new features, fixing bugs, improving documentation,
+or optimizing user experience, we welcome all contributions!
### Trimester Workflow
-1. **Explore OnTrack**: Begin by exploring the various OnTrack resources on this website. See below
- for links. Familiarize yourself with the structure and functionality of each project and its
- repositories and contribution guides.
-2. **Choose Tasks**: Work with the team or your mentor to identify tasks you can complete. These may
- range from feature development and bug fixes to documentation improvements.
-3. **Fork the Repository**: When contributing, be sure to fork from the Thoth-Tech repo. This
- ensures changes are first reviewed and integrated internally before being merged into the main
- project.
-4. **Follow the Contribution Guide**: If a repository has its own contribution guide, usually in a
- CONTRIBUTING.md file, then this guide should be followed. These will provide specific guidelines
- to set up environments to work on particular projects. If you are unsure, reach out to fellow
- team members and your mentor for further guidance.
-5. **Make Changes**: Begin working on your chosen task. Be sure to follow the repository's
- guidelines and document your work clearly.
-6. **Submit a Pull Request (PR)**: Use the provided PR template (if available) to submit your work.
- Clearly explain your changes, and explain the context and reasoning behind your changes. Ensure
- your code is well-tested and documented.
-7. **Peer Review**: All contributions are subject to peer review. This is an opportunity to
- collaborate with other developers, improve the quality of your code, and ensure that it adheres
- to project standards. Peer reviews will involve a list of tasks that you are expected to review,
- but they are also expected to be in the form of a discussion which aims to produce the best
+1. **Explore OnTrack**: Begin by exploring the various OnTrack resources on this
+ website. See below for links. Familiarize yourself with the structure and
+ functionality of each project and its repositories and contribution guides.
+2. **Choose Tasks**: Work with the team or your mentor to identify tasks you can
+ complete. These may range from feature development and bug fixes to
+ documentation improvements.
+3. **Fork the Repository**: When contributing, be sure to fork from the
+ Thoth-Tech repo. This ensures changes are first reviewed and integrated
+ internally before being merged into the main project.
+4. **Follow the Contribution Guide**: If a repository has its own contribution
+ guide, usually in a CONTRIBUTING.md file, then this guide should be followed.
+ These will provide specific guidelines to set up environments to work on
+ particular projects. If you are unsure, reach out to fellow team members and
+ your mentor for further guidance.
+5. **Make Changes**: Begin working on your chosen task. Be sure to follow the
+ repository's guidelines and document your work clearly.
+6. **Submit a Pull Request (PR)**: Use the provided PR template (if available)
+ to submit your work. Clearly explain your changes, and explain the context
+ and reasoning behind your changes. Ensure your code is well-tested and
+ documented.
+7. **Peer Review**: All contributions are subject to peer review. This is an
+ opportunity to collaborate with other developers, improve the quality of your
+ code, and ensure that it adheres to project standards. Peer reviews will
+ involve a list of tasks that you are expected to review, but they are also
+ expected to be in the form of a discussion which aims to produce the best
changes possible.
-8. **Mentor Review**: After peer review, your mentor will review the changes for final approval
- before they are merged.
-9. **Merging**: Contributions are typically merged into the main project repository at the end of a
- development trimester, ensuring stability and quality.
+8. **Mentor Review**: After peer review, your mentor will review the changes for
+ final approval before they are merged.
+9. **Merging**: Contributions are typically merged into the main project
+ repository at the end of a development trimester, ensuring stability and
+ quality.
diff --git a/src/content/docs/Resources/Onboarding Hub/splashkit-onboarding-doc.mdx b/src/content/docs/Resources/Onboarding Hub/splashkit-onboarding-doc.mdx
index 43713e96b..2068f7f29 100644
--- a/src/content/docs/Resources/Onboarding Hub/splashkit-onboarding-doc.mdx
+++ b/src/content/docs/Resources/Onboarding Hub/splashkit-onboarding-doc.mdx
@@ -6,41 +6,48 @@ import { Steps, LinkCard, CardGrid } from "@astrojs/starlight/components";
## Contributing to SplashKit
-Contributing to SplashKit is a great way to help improve a powerful, beginner-friendly game
-development toolkit while gaining experience in open-source development. Whether you’re interested
-in adding new features, fixing bugs, improving documentation, or helping with testing, we welcome
-all kinds of contributions!
+Contributing to SplashKit is a great way to help improve a powerful,
+beginner-friendly game development toolkit while gaining experience in
+open-source development. Whether you’re interested in adding new features,
+fixing bugs, improving documentation, or helping with testing, we welcome all
+kinds of contributions!
### Trimester Workflow
-1. **Explore SplashKit**: Begin by exploring the various SplashKit resources on this website. See
- below for links. Familiarize yourself with the structure and functionality of each project and
- its repositories and contribution guides.
-2. **Choose Tasks**: Work with the team or your mentor to identify tasks you can complete. These may
- range from feature development and bug fixes to documentation improvements.
-3. **Fork the Repository**: When contributing, be sure to fork from the Thoth-Tech repo, not the
- upstream SplashKit repo. This ensures changes are first reviewed and integrated internally before
- being merged upstream.
-4. **Follow the Contribution Guide**: If a repository has its own contribution guide, usually in a
- CONTRIBUTING.md file, then this guide should be followed. These will provide specific guidelines
- to setup environments to work on particular projects. If you are unsure, reach out to fellow team
- members and your mentor for further guidance.
-5. **Make Changes**: Begin working on your chosen task. Be sure to follow the repository's
- guidelines and document your work clearly.
-6. **Submit a Pull Request (PR)**: Use the provided PR template (if available) to submit your work.
- Clearly explain your changes, and explain the context and reasoning behind your changes. Ensure
- your code is well-tested and documented.
-7. **Peer Review**: All contributions are subject to peer review. This is an opportunity to
- collaborate with other developers, improve the quality of your code, and ensure that it adheres
- to project standards. Peer reviews will involve a list of tasks that you are expected to review,
- but they are also expected to be in the form of a discussion which aims to produce the best
+1. **Explore SplashKit**: Begin by exploring the various SplashKit resources on
+   this website (see the links below). Familiarize yourself with each
+   project's structure and functionality, along with its repositories and
+   contribution guides.
+2. **Choose Tasks**: Work with the team or your mentor to identify tasks you can
+ complete. These may range from feature development and bug fixes to
+ documentation improvements.
+3. **Fork the Repository**: When contributing, be sure to fork from the
+ Thoth-Tech repo, not the upstream SplashKit repo. This ensures changes are
+ first reviewed and integrated internally before being merged upstream.
+4. **Follow the Contribution Guide**: If a repository has its own contribution
+ guide, usually in a CONTRIBUTING.md file, then this guide should be followed.
+   These will provide specific guidelines to set up environments to work on
+ particular projects. If you are unsure, reach out to fellow team members and
+ your mentor for further guidance.
+5. **Make Changes**: Begin working on your chosen task. Be sure to follow the
+ repository's guidelines and document your work clearly.
+6. **Submit a Pull Request (PR)**: Use the provided PR template (if available)
+   to submit your work. Clearly describe your changes, along with the context
+   and reasoning behind them. Ensure your code is well-tested and documented.
+7. **Peer Review**: All contributions are subject to peer review. This is an
+ opportunity to collaborate with other developers, improve the quality of your
+   code, and ensure that it adheres to project standards. Peer reviews involve
+   a checklist of items you are expected to review, but they should also take
+   the form of a discussion that aims to produce the best
changes possible.
-8. **Mentor Review**: After peer review, your mentor will review the changes for final approval
- before they are merged.
-9. **Merging**: Contributions are typically merged upstream at the end of a development trimester,
- ensuring the stability and quality of the SplashKit project.
+8. **Mentor Review**: After peer review, your mentor will review the changes for
+ final approval before they are merged.
+9. **Merging**: Contributions are typically merged upstream at the end of a
+ development trimester, ensuring the stability and quality of the SplashKit
+ project.
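
The fork, branch, commit, and push flow in the steps above can be sketched with
a few git commands. This is a minimal, hypothetical sequence: a throwaway local
repository stands in for a real Thoth-Tech fork, and the branch name, file, and
commit message are placeholders rather than project conventions.

```shell
# Minimal sketch of the branch-and-commit workflow described above.
# A throwaway local repository stands in for a cloned Thoth-Tech fork;
# the branch name, file, and commit message are hypothetical placeholders.
git init --quiet demo-repo && cd demo-repo
git config user.email "you@example.com"   # local identity for the demo repo
git config user.name "Your Name"
git checkout -b feature/docs-1.1-update-onboarding  # descriptive branch name
echo "Onboarding notes" > notes.md
git add notes.md
git commit --quiet -m "docs: add onboarding notes"
git branch --show-current   # prints feature/docs-1.1-update-onboarding
```

In a real contribution the same sequence runs inside your fork's clone, and
ends with `git push origin <branch-name>` followed by opening a draft PR.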
@@ -57,13 +64,21 @@ all kinds of contributions!
description=""
/>
-
+
-
+
- Jasmine is used as a unit testing framework
[https://jasmine.github.io/](https://jasmine.github.io/)
-- Cypress is used for end-to-end testing [https://www.cypress.io/](https://www.cypress.io/)
+- Cypress is used for end-to-end testing
+ [https://www.cypress.io/](https://www.cypress.io/)
- Karma is used for testing automation
[https://karma-runner.github.io/latest/index.html](https://karma-runner.github.io/latest/index.html)
- App is built using Node.js: [http://nodejs.org/](http://nodejs.org/)
## QA Deliverables
-What artifacts QA will provide to the team (eg, Test Strategy, Sample Test Plan, Bug reports)
+What artifacts QA will provide to the team (e.g., Test Strategy, Sample Test
+Plan, Bug reports)
### Examples
@@ -40,15 +43,16 @@ What artifacts QA will provide to the team (eg, Test Strategy, Sample Test Plan,
## Test Management
-What resources are used to carry out testing in terms of tooling, environments, supported platforms
-and versions, and test data
+What resources are used to carry out testing in terms of tooling, environments,
+supported platforms and versions, and test data
### Examples
-- Jenkins is used to build test versions of the application off of master and PRs
+- Jenkins is used to build test versions of the application from master and
+  PRs
- VMs are used to test the applications in Windows
-- Test runs are input in Testpad to make it clear what scenarios were tested and if those scenarios
- pass or fail
+- Test runs are input in Testpad to make it clear what scenarios were tested and
+ if those scenarios pass or fail
- Supported operating systems are Windows 7 and 10 and Mac
- Test data will include user accounts
diff --git a/src/content/docs/Resources/Quality Assurance/git-contributions-guide.md b/src/content/docs/Resources/Quality Assurance/git-contributions-guide.md
index 03ea2ee57..9bfc3fff3 100644
--- a/src/content/docs/Resources/Quality Assurance/git-contributions-guide.md
+++ b/src/content/docs/Resources/Quality Assurance/git-contributions-guide.md
@@ -14,24 +14,25 @@ sidebar:
## Contributing to Repositories: How To
-Repositories are where existing Thoth Tech code is stored, and where new code contributions, once
-tested and approved, will ultimately be merged.
+Repositories are where existing Thoth Tech code is stored, and where new code
+contributions, once tested and approved, will ultimately be merged.
To begin working on your project, follow these steps:
### If you have not yet cloned the repository to your local machine
-- **Clone the Repository**: Clone your project's relevant Thoth Tech repository to your local
- machine:
+- **Clone the Repository**: Clone your project's relevant Thoth Tech repository
+ to your local machine:
```shell
git clone
```
-- **Navigate to the created project folder**. You will be on the default branch (main/master).
+- **Navigate to the created project folder**. You will be on the default branch
+ (main/master).
### If you have already cloned the repository to your local machine
-- **Update Your Local Copy**: Ensure you are on the main/master branch and pull the latest changes
- from the origin:
+- **Update Your Local Copy**: Ensure you are on the main/master branch and pull
+ the latest changes from the origin:
```shell
git checkout main
@@ -47,7 +48,8 @@ _Then:_
git checkout -b
```
-- **Make Your Changes:** Implement your code changes on the newly created branch.
+- **Make Your Changes:** Implement your code changes on the newly created
+ branch.
- **Commit Your Changes:** Commit your changes using the format provided in the
[Commit Guidelines](#commit-guidelines).
@@ -62,27 +64,30 @@ _Then:_
git push origin
```
-- **Create a Draft Pull Request:** Create a [Draft Pull Request](#draft-pull-request) to merge your
- branch into the main Thoth Tech branch for your repository. Add
- [Required Approvals](#required-approvals) (note: it will be blocked from merging while in draft
- form). Comment on the progress and any feedback sought.
+- **Create a Draft Pull Request:** Create a
+ [Draft Pull Request](#draft-pull-request) to merge your branch into the main
+ Thoth Tech branch for your repository. Add
+  [Required Approvals](#required-approvals) (note: it will be blocked from
+  merging while in draft form). Comment on your progress and note any
+  feedback you are seeking.
-- **Continue Development:** Keep making changes on your local branch, committing and pushing until
- you are satisfied that the code meets all tests, acceptance criteria, and is ready for merging.
+- **Continue Development:** Keep making changes on your local branch, committing
+ and pushing until you are satisfied that the code meets all tests, acceptance
+ criteria, and is ready for merging.
-- **Publish Your Pull Request:** Change the status of your Pull Request to "Ready for Review" to
- finalise it.
+- **Publish Your Pull Request:** Change the status of your Pull Request to
+ "Ready for Review" to finalise it.
For an example sequence of git commands used in this process, refer to the
[Git Workflow Summary](#git-workflow-summary).
## Branching Guidelines
-No commits should be made directly to the default branch (usually main/master/develop). Instead,
-branches should be created off the default branch to encompass any changes.
+No commits should be made directly to the default branch (usually
+main/master/develop). Instead, branches should be created off the default branch
+to encompass any changes.
-Branch names must be descriptive and include a reference to the task or subtask number the work
-relates to, following this format:
+Branch names must be descriptive and include a reference to the task or subtask
+number the work relates to, following this format:
| Branch Naming Format | Use |
| ------------------------------------------------------ | ---------------------------------------------- |
@@ -110,8 +115,8 @@ _Tasks:_
3.1 _subtask..._
-A programmer starting work on the Voice Verification component subtask 1.2 should use a branch
-named: `feature/voice-verification-1.2-store-voice-input`.
+A programmer starting work on the Voice Verification component subtask 1.2
+should use a branch named: `feature/voice-verification-1.2-store-voice-input`.
This branch can be created and checked out using the git command:
@@ -121,9 +126,11 @@ git checkout -b feature/voice-verification-1.2-store-voice-input
## Commit Guidelines
-Thoth Tech follows the Git commit message format required by the Doubtfire LMS (see
+Thoth Tech follows the Git commit message format required by the Doubtfire LMS
+(see
[doubtfire-lms's CONTRIBUTING.md](https://github.com/doubtfire-lms/doubtfire-deploy/blob/development/CONTRIBUTING.md#commit-message-format)),
-which this section mirrors. This format makes for an easier-to-read and more useful commit history.
+which this section mirrors. This format makes for an easier-to-read and more
+useful commit history.
### Message Format
@@ -137,11 +144,12 @@ Each commit message consists of a header, a body, and a footer.
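
The header, body, and footer can each be supplied as a separate `-m` paragraph
to `git commit`. The sketch below is hypothetical: the type, scope, subject,
and issue number are placeholders, and a throwaway repository is created only
so the commit can run.

```shell
# Hypothetical commit showing the header / body / footer structure.
# The type (fix), scope (docs), and issue number are placeholders.
git init --quiet commit-demo && cd commit-demo
git config user.email "you@example.com"
git config user.name "Your Name"
echo "example" > example.txt && git add example.txt
git commit --quiet \
  -m "fix(docs): correct the branch naming example" \
  -m "The example branch name did not follow the documented format." \
  -m "Fixes #123"
git log -1 --format=%B   # shows the three paragraphs of the message
```

Each `-m` becomes its own paragraph in the stored message, so the first acts
as the header, the second as the body, and the last as the footer.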