and a transparent relationship with us.

If you have any common questions or need further information, see our
[Billing FAQs](billing-faqs.md) for comprehensive answers.

## SU Conservation - How to Save Cost?

Since SUs are the primary metric for resource consumption, it's crucial to actively
manage your workloads and release resources when they're not in use.

Below are practical ways to conserve SUs across different NERC services:

### NERC OpenStack

Once you're logged in to [NERC's Horizon dashboard](https://stack.nerc.mghpcc.org),
navigate to _Project -> Compute -> Instances_ using the left sidebar.

After launching an instance, several options are available under the Actions menu
located on the right-hand side of your screen, as shown here:

![Instance Management Actions](images/instance_actions.png)

**Shelve your VM when not in use**:

In [NERC OpenStack](../../openstack/index.md), if your VM does not need to run
continuously, you can **shelve** it to free up consumed resources such as vCPUs,
RAM, and disk. This action releases all allocated resources while preserving the
VM's state.

- Click _Action -> Shelve Instance_.

- Releases all computing resources (i.e., vCPU, RAM, and disk).

- We strongly recommend detaching volumes before shelving.

- Status will change to `Shelved Offloaded`.

You can later **unshelve** the VM without needing to recreate it, allowing you
to reduce costs without losing any progress.

- To unshelve the instance, click _Action -> Unshelve Instance_.

For more details on *shelving a VM*, see the explanation [here](../../openstack/management/vm-management.md#instance-management-actions).

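If you prefer working from the command line, the same shelve and unshelve actions
are also available through the OpenStack CLI. The sketch below assumes you have the
`openstack` client installed and configured with your NERC OpenStack credentials,
and uses `my-vm` as a placeholder instance name:

```sh
# Shelve the instance to release its vCPU, RAM, and disk allocation.
openstack server shelve my-vm

# Check the instance status (it should eventually show SHELVED_OFFLOADED).
openstack server show my-vm -c status

# Later, restore the instance without recreating it.
openstack server unshelve my-vm
```
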
### NERC OpenShift

**Scale your pods to 0 replicas**:

In [NERC OpenShift](../../openshift/index.md), if your application or job is idle,
you can scale its pod replica count to **0**. This effectively frees up compute
resources (CPU, GPU, and RAM) while retaining the configuration, environment settings,
and persistent volume claims (PVCs) for future use.

#### Using Web Console

1. Go to the [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org).

2. Click on the **Perspective Switcher** drop-down menu and select **Developer**.

3. Click the pod or application you want to scale to see the _Overview_ panel to
   the right.

4. Open the **Details** tab (usually the *default* tab when you open the deployment).

5. Look for the Pod count or Replicas section.

6. Use the up/down arrows next to the number to adjust the replica count.

7. Set it to **0** by clicking the down arrow, as shown below:

    ![Scaling Pod to 0](images/scale-0-pod.png)

8. OpenShift will automatically scale down the pods to 0.

When you need to run your application again, you can scale up the pod count or
replicas to reclaim the necessary resources.

#### Using the OpenShift `oc` CLI

##### Prerequisite

- Install and configure the **OpenShift CLI (oc)**; see [How to Setup the
  OpenShift CLI Tools](../../openshift/logging-in/setup-the-openshift-cli.md)
  for more information.

!!! info "Information"

    Some users may have access to multiple projects. Run the following command to
    switch to a specific project space: `oc project <your-project-namespace>`.

    Please confirm the correct project is selected by running `oc project`,
    as shown below:

        oc project
        Using project "<your_openshift_project_where_pod_deployed>" on server "https://api.shift.nerc.mghpcc.org:6443".

If your application or job is idle, you can scale your pod's replica count to
**0** by running the following `oc` command:

```sh
oc scale deployment <your-deployment> --replicas=0
```

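To confirm that the deployment has actually scaled down, you can inspect it with
`oc get`; this is an illustrative check using the same `<your-deployment>` placeholder:

```sh
# The READY column should report 0/0 once the pods have terminated.
oc get deployment <your-deployment>

# Alternatively, print just the desired replica count (should be 0).
oc get deployment <your-deployment> -o jsonpath='{.spec.replicas}'
```
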
When you need to run your application again, you can scale up the pod count or
replicas to reclaim the necessary resources by running:

```sh
oc scale deployment <your-deployment> --replicas=1
```

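If you want to watch the pods become available again after scaling up, one option
is to follow the rollout status (again using the placeholder deployment name):

```sh
# Blocks until the deployment reports its pods are available again.
oc rollout status deployment/<your-deployment>
```
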
### NERC RHOAI

**Toggle the Workbench to "Stopped"**:

In [NERC Red Hat OpenShift AI (RHOAI)](../../openshift-ai/index.md), workbench
environments can be toggled between **Running** and **Stopped** states.

1. Go to the [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org).

2. After logging in to the NERC OpenShift console, access NERC's Red Hat OpenShift
   AI dashboard by clicking the application launcher icon (the black-and-white
   icon that looks like a grid), located on the header.

3. When you've completed a workload such as model development or experimentation
   using the [Data Science Project (DSP)](../../openshift-ai/data-science-project/using-projects-the-rhoai.md)
   **Workbench**, you can stop the compute resources by toggling the status from
   **Running** to **Stopped**, as shown below:

This action immediately releases the compute resources allocated to the notebook
environment within the Workbench setup.

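If you'd like to double-check that the compute resources were released, one
assumption-based sketch (not an official procedure) is to list the pods in the
OpenShift project that backs your Data Science Project using the `oc` CLI; the
project name below is a placeholder:

```sh
# List pods in the OpenShift project backing your Data Science Project (placeholder).
oc get pods -n <your-data-science-project>

# Once the Workbench shows "Stopped", its notebook pod should disappear from this
# list (it may briefly show as Terminating while it shuts down).
```
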
When you need to run your workbench again, just toggle its status back from
**Stopped** to **Running**.