Skip to content

AI review make target #1817

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
wants to merge 9 commits into
base: master
Choose a base branch
from
Open

AI review make target #1817

wants to merge 9 commits into from

Conversation

kaovilai
Copy link
Member

@kaovilai kaovilai commented Jun 30, 2025

Why the changes were made

This PR introduces AI-powered code review capabilities for the OADP operator project, with two main features:

1. AI Review Make Target

New make target ai-review-gptme-ollama that enables automated code review using AI models locally with gptme and Ollama backend. This feature provides enhanced review capabilities with tools support.

This helps developers:

  • Get instant AI-powered code review feedback on staged changes
  • Run reviews locally without external dependencies or API keys
  • Catch potential issues early in the development process
  • Improve code quality with automated suggestions
  • Access enhanced review features through gptme tools (file reading, browsing when lynx is available)

2. Configurable AI Review Prompts

The AI review prompt is now externalized and configurable:

  • Default prompt moved to ai/Makefile/Prompt/prompt.example
  • Users can create custom prompts by copying to ai/Makefile/Prompt/prompt
  • Custom prompt file is gitignored to prevent accidental commits
  • Makefile automatically uses custom prompt if present, falls back to example

The implementation uses Ollama running in a container to provide AI analysis of git diffs, focusing on:

  • Code quality and best practices
  • Potential bugs or issues
  • Go idioms and conventions
  • Kubernetes/OpenShift operator patterns
  • Security concerns

How to test the changes made

Prerequisites:

  • gptme installed (pip install gptme)
  • Podman installed and running

Running AI Review:

  1. Stage your changes:

    git add <your-files>
  2. Run the AI review with gptme:

    make ai-review-gptme-ollama

Custom Prompts:

  1. Create a custom prompt:

    cp ai/Makefile/Prompt/prompt.example ai/Makefile/Prompt/prompt
    # Edit ai/Makefile/Prompt/prompt to customize
  2. Run AI review with your custom prompt:

    make ai-review-gptme-ollama

The command will:

  • Start an Ollama container if not already running
  • Pull the specified model (default: gemma3:12b, ~11GB on first run)
  • Analyze your staged changes using gptme with Ollama backend
  • Provide detailed feedback on code quality, potential issues, and suggestions
  • Enable tools like file reading and browsing (if lynx is installed) for enhanced context

Options:

  • OLLAMA_MODEL=<model>: Use a different Ollama model (default: gemma3:12b)
  • OLLAMA_MEMORY=<size>: Set memory limit for Ollama container (default: 11g)
  • DEBUG=1: Enable debug output

Example output:

❯ make ai-review-gptme-ollama
Using Container Tool: docker
Ollama not detected, starting container...
99221d07d90a1cdcf597c90f6c72feff8b237752a3ef8ec66f7d67224282e2c4
Waiting for Ollama to be ready...
Ollama is ready\!
Ensuring gemma3:12b model is available...
[... model download ...]
Running gptme with Ollama backend...
[... detailed AI review output with enhanced context ...]

Available Models:

Small models (< 2GB memory):

  • llama3.2:1b
  • phi3:mini
  • tinyllama

Medium models (4-8GB memory):

  • llama3.2:3b
  • gemma2:2b
  • gemma3n:e4b (~7GB)
  • gemma3n:e2b

Larger models (8GB+ memory):

  • llama3.1:8b
  • mistral
  • gemma3:12b (11GB, default)

Signed-off-by: Tiger Kaovilai [email protected]

Example run

```
❯ make ai-review-ollama
Using Container Tool: docker
bash: /Users/tkaovila/oadp-operator-opencode/bin/operator-sdk: No such file or directory
bash: /Users/tkaovila/oadp-operator-opencode/bin/opm: No such file or directory
Ollama not detected, starting container...
99221d07d90a1cdcf597c90f6c72feff8b237752a3ef8ec66f7d67224282e2c4
Waiting for Ollama to be ready...
Ollama is ready!
Ensuring gemma3n:e4b model is available...
pulling manifest
pulling 38e8dcc30df4: 100% ▕██████████████████▏ 7.5 GB
pulling e0a42594d802: 100% ▕██████████████████▏  358 B
pulling 1adbfec9dcf0: 100% ▕██████████████████▏ 8.4 KB
pulling 8eac5d7750c5: 100% ▕██████████████████▏  491 B
verifying sha256 digest
writing manifest
success
Reviewing staged changes with Ollama using gemma3n:e4b...
Preparing request...
Sending request to Ollama API...
## Review of the Git Diff for OADP/Makefiile

This review focuses on the provided `Makefiile` diff, specifically addressing code quality, potential bugs, Go idioms, Kubernetes/OpenShift operator patterns, and security concerns.

### 1. Code Quality and Best Practices

* **Clear Comments:** The comments in the `Makefiile` are generally good, explaining the purpose of sections and individual commands. The use of `define` for the AI prompt is a good practice for maintainability.
* **Consistent Formatting:** The code is reasonably well-formatted, making it readable.
* **Use of Variables:** Using variables like `OLLAMA_MODEL` and `OLLAMA_MEMORY` improves readability and makes configuration easier.
* **Error Handling:** The script includes basic error handling (e.g., checking for Ollama availability, handling errors from `poiman` and `curl`).
* **Debug Flag:** The `DEBUG` flag is a useful addition for troubleshooting.

### 2. Potential Bugs or Issues

* **Race Condition with Ollama:** While the script checks if Ollama is running, there's a potential race condition. If the script starts the Ollama container and then immediately proceeds to pull the model, there might be a brief period where the container isn't fully ready. This could lead to errors during the model pull.
* **Error Handling for `jq`:** The script uses `jq` without explicitly checking its exit code. If `jq` fails for any reason, the script might continue with potentially incorrect data.
* **`poiman ps` Dependency:** The script relies on `poiman ps` being available. This might not be the case in all environments. Consider providing a fallback mechanism or checking for the availability of `poiman`.
* **Potential for Long-Running `curl`:** The `curl` command for pulling the model has a timeout of 300 seconds. If the model is very large or the network connection is slow, this timeout might be insufficient.

### 3. Go Idioms and Conventions

* **Shell Scripting:** The `Makefiile` is written in shell scripting, which is a common practice for build systems. However, for more complex logic, consider using Go for better maintainability and testability.
* **Command Chaining:** The script uses command chaining (`&&`) effectively to ensure commands are executed sequentially.
* **Variable Substitution:** Variable substitution (`$$`) is used correctly.

### 4. Kubernetes/OpenShift Operator Patterns

* **Build Process:** The `Makefiile` defines a build process for the OADP operator, which is a standard pattern for Kubernetes/OpenShift operators.
* **Dependency Management:** The script implicitly manages dependencies by ensuring that the necessary tools (like `poiman` and `jq`) are installed.

### 5. Security Concerns

* **Ollama API Access:** The script interacts with the Ollama API. Ensure that the API is properly secured and that access is restricted to authorized users.
* **Sensitive Information:** Avoid hardcoding sensitive information (like API keys) in the `Makefiile`. Consider using environment variables or a secrets management solution.
* **Input Validation:** The script doesn't explicitly validate the `OLLAMA_MODEL` variable. Ensure that the provided model name is valid and doesn't lead to any security vulnerabilities.

## Actionable Feedback

1. **Address Potential Race Condition:** Introduce a short delay or a retry mechanism before attempting to pull the model after starting the Ollama container.
2. **Add `jq` Error Handling:** Check the exit code of `jq` commands and handle errors appropriately.
3. **Consider Fallback for `poiman ps`:** Provide a fallback mechanism or check for the availability of `poiman` before using it.
4. **Review `curl` Timeout:** Evaluate if the 300-second timeout for the model pull is sufficient for all scenarios.
5. **Document Security Considerations:** Add comments in the `Makefiile` to highlight the security considerations mentioned above.
6. **Consider Go for Complex Logic:** If the build process becomes more complex, consider refactoring it into a Go program for better maintainability and testability.
7. **Input Validation for `OLLAMA_MODEL`:** Add validation to ensure the `OLLAMA_MODEL` variable is a valid model name.

This feedback should help improve the code quality, reliability, and security of the OADP operator build process.
```

Signed-off-by: Tiger Kaovilai <[email protected]>
@openshift-ci openshift-ci bot requested review from mrnold and sseago June 30, 2025 23:51
Copy link

openshift-ci bot commented Jun 30, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kaovilai

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 30, 2025
@weshayutin

This comment was marked as resolved.

@kaovilai

This comment was marked as resolved.

kaovilai added 5 commits July 1, 2025 10:59
…d updating model configurations

Signed-off-by: Tiger Kaovilai <[email protected]>
Signed-off-by: Tiger Kaovilai <[email protected]>
```
❯ make ai-review-gptme-ollama
Using Container Tool: docker
bash: /Users/tkaovila/oadp-operator-opencode/bin/operator-sdk: No such file or directory
bash: /Users/tkaovila/oadp-operator-opencode/bin/opm: No such file or directory
gptme is already installed at /Users/tkaovila/oadp-operator-opencode/bin/gptme
Ollama not detected, starting container...
c3a66f9c2d48dd76b73790657f8f35a17323767dc8be9272640e960af1ad2251
Waiting for Ollama to be ready...
Ollama is ready!
Ensuring gemma3:12b model is available...
pulling manifest
pulling e8ad13eff07a: 100% ▕██████████████████▏ 8.1 GB
pulling e0a42594d802: 100% ▕██████████████████▏  358 B
pulling dd084c7d92a3: 100% ▕██████████████████▏ 8.4 KB
pulling 3116c5225075: 100% ▕██████████████████▏   77 B
pulling 6819964c2bcf: 100% ▕██████████████████▏  490 B
verifying sha256 digest
writing manifest
success
Reviewing staged changes with gptme using Ollama model: gemma3:12b...
gptme will be able to read files and browse documentation for context.
[21:14:27] Browser tool available (using lynx)
[21:14:27] WARNING  Skipping all confirmation prompts.                                                                                                     cli.py:176
           Using model: local/gemma3:12b
           Using logdir ~/.local/share/gptme/logs/2025-07-01-sneaking-crazy-pelican
           Using workspace at ~/oadp-operator-opencode
Skipped 1 hidden system messages, show with --show-hidden
--- ^^^ past messages ^^^ ---
User:
Review this git diff for a project called OADP (OpenShift API for Data Protection) operator. Focus on: 1. Code quality and best practices 2. Potential bugs or issues
3. Go idioms and conventions 4. Kubernetes/OpenShift operator patterns 5. Security concerns Please provide actionable feedback. Be concise but thorough. If able,
browse linked URLs for context.

```stdin
diff --git a/Makefile b/Makefile
index 2da6e68..c5e7d35 100644
--- a/Makefile
+++ b/Makefile
@@ -704,7 +704,7 @@ endef
 # Default Ollama model (using a smaller model that requires less memory)
 OLLAMA_MODEL ?= gemma3:12b
 # will require at least this much free mem in your machine or podman machine (non-linux)
-OLLAMA_MEMORY ?= 9
+OLLAMA_MEMORY ?= 11

 # This target reviews staged changes using gptme with Ollama backend
 # Prerequisites:
```
Assistant:
Okay, I will review the provided `Makefile` diff for the OADP operator project, focusing on the requested areas.

`<thinking>` The diff is very small, only changing a variable `OLLAMA_MEMORY` from 9 to 11. This suggests a change related to resource requirements for the GPTME integration. I'm going to focus my review on the implications of this change and the surrounding context.`</thinking>

**Review of `Makefile` Diff**

*   **Code Quality & Best Practices:** The change itself is trivial and doesn't introduce any immediate code quality concerns. However, the variable name `OLLAMA_MEMORY` is not self-documenting. A comment explaining *why* the value was increased would be beneficial.
*   **Potential Bugs/Issues:** Increasing `OLLAMA_MEMORY` without understanding the root cause of the original value (9) could mask an underlying problem. It's possible the original value was artificially low due to a misconfiguration or resource constraint.  It's crucial to investigate *why* more memory is now needed.
*   **Go Idioms & Conventions:** Not applicable, as this is a Makefile, not Go code.
*   **Kubernetes/OpenShift Operator Patterns:** Not applicable.
*   **Security Concerns:**  The change itself doesn't introduce direct security concerns. However, if the increased memory allocation leads to resource exhaustion, it *could* create a denial-of-service vulnerability.  Monitoring resource usage after this change is important.

**Actionable Feedback:**

1.  **Add a comment:**  Add a comment to the `Makefile` explaining *why* `OLLAMA_MEMORY` was increased from 9 to 11.  For example:  `# Increased OLLAMA_MEMORY to 11 to resolve memory allocation issues with larger prompts.`
2.  **Investigate Root Cause:** Determine why the original value of 9 was insufficient.  Was it a genuine resource constraint, or a configuration issue?
3.  **Monitor Resource Usage:** After deploying with the new `OLLAMA_MEMORY` value, monitor the resource usage of the GPTME integration to ensure it's stable and doesn't lead to resource exhaustion.
Stopping and removing Ollama container...
ollama
ollama
```

Signed-off-by: Tiger Kaovilai <[email protected]>
Signed-off-by: Tiger Kaovilai <[email protected]>
Makefile Outdated

# Common AI review prompt
define AI_REVIEW_PROMPT
Review this git diff for a project called OADP (OpenShift API for Data Protection) operator. Focus on: \
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Start small please, this prompt is TOO general, and too big.
Let's get feedback from the team with regards to what would be smart to focus on first.

Even with 11G of ram provided, this is quite slow.
IMHO a more effective first pass, MVP would be much faster and focus on one or two items of code quality. IMHO the team should discuss and pick one or two items together.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Having a smaller prompt won't speed this up, it all depends on how big the diff is being passed and the machine..

and now you know why local models won't ever work to help day to day coding. You'll spend all day waiting without H100.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Or be stuck with a dumber small models that can't do anything more complex than one line tab completion.

@weshayutin
Copy link
Contributor

my resutls

[08:41:59] WARNING  Skipping all confirmation prompts.                                                                                                        cli.py:176
[08:41:59] Using model: local/gemma3:12b                                                                                                                                
           Using logdir ~/.local/share/gptme/logs/2025-07-02-swimming-funny-mouse                                                                                       
           Using workspace at ~/OPENSHIFT/git/OADP/oadp-operator                                                                                                        
Skipped 1 hidden system messages, show with --show-hidden
--- ^^^ past messages ^^^ ---
           WARNING  Browser tool not available, skipping URL read                                                                                         context.py:543
User: 
Review this git diff for a project called OADP (OpenShift API for Data Protection) operator. Focus on: 1. Code quality and best practices 2. Potential bugs or issues 3.
Go idioms and conventions 4. Kubernetes/OpenShift operator patterns 5. Security concerns Please provide actionable feedback. Be concise but thorough. If able, browse 
linked URLs for context.

```stdin
diff --git a/Makefile b/Makefile                                                                                                                                        
index b9d053c4..b8591e22 100644                                                                                                                                         
--- a/Makefile                                                                                                                                                          
+++ b/Makefile                                                                                                                                                          
@@ -740,34 +740,36 @@ ai-review-gptme-ollama: gptme ## Review staged git changes using gptme with loca                                                                  
    @# Check if Ollama is already available (either as existing container or local service)                                                                             
    @if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \                                                                                                 
        echo "Ollama is already running on port 11434"; \                                                                                                               
-       OLLAMA_EXTERNAL=1; \                                                                                                                                            
    else \                                                                                                                                                              
-       OLLAMA_EXTERNAL=0; \                                                                                                                                            
        echo "Ollama not detected, starting container..."; \                                                                                                            
        mkdir -p .ollama; \                                                                                                                                             
-       if ! podman ps | grep -q ollama; then \                                                                                                                         
-           podman run -d \                                                                                                                                             
-               -v $$(pwd)/.ollama:/root/.ollama \                                                                                                                      
-               -p 11434:11434 \                                                                                                                                        
-               --memory=$(OLLAMA_MEMORY)g \                                                                                                                            
-               --memory-swap=$(OLLAMA_MEMORY)g \                                                                                                                       
-               --name ollama \                                                                                                                                         
-               ollama/ollama || exit 1; \                                                                                                                              
-           echo "Waiting for Ollama to be ready..."; \                                                                                                                 
-           for i in $$(seq 1 30); do \                                                                                                                                 
-               if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \                                                                                      
-                   echo "Ollama is ready!"; \                                                                                                                          
-                   break; \                                                                                                                                            
-               fi; \                                                                                                                                                   
-               if [ $$i -eq 30 ]; then \                                                                                                                              
-                   echo "Error: Ollama failed to start within 30 seconds"; \                                                                                           
-                   podman logs ollama; \                                                                                                                               
-                   podman stop ollama && podman rm ollama; \                                                                                                           
-                   exit 1; \                                                                                                                                           
-               fi; \                                                                                                                                                   
-               sleep 1; \                                                                                                                                              
-           done \                                                                                                                                                      
-       fi \                                                                                                                                                            
+   fi                                                                                                                                                                  
+   @# Start Ollama container if not already running                                                                                                                    
+   @if ! curl -s http://localhost:11434/api/tags >/dev/null 2>&1 && ! podman ps | grep -q ollama; then \                                                               
+       podman run -d \                                                                                                                                                 
+           -v $${PWD}/.ollama:/root/.ollama:Z \                                                                                                                        
+           -p 11434:11434 \                                                                                                                                            
+           --memory=$(OLLAMA_MEMORY)g \                                                                                                                                
+           --memory-swap=$(OLLAMA_MEMORY)g \                                                                                                                           
+           --name ollama \                                                                                                                                             
+           docker.io/ollama/ollama || exit 1; \                                                                                                                        
+   fi                                                                                                                                                                  
+   @# Wait for Ollama to be ready                                                                                                                                      
+   @if ! curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \                                                                                               
+       echo "Waiting for Ollama to be ready..."; \                                                                                                                     
+       for i in $$(seq 1 30); do \                                                                                                                                     
+           if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \                                                                                          
+               echo "Ollama is ready!"; \                                                                                                                              
+               break; \                                                                                                                                                
+           fi; \                                                                                                                                                       
+           if [ $$i -eq 30 ]; then \                                                                                                                                  
+               echo "Error: Ollama failed to start within 30 seconds"; \                                                                                               
+               podman logs ollama; \                                                                                                                                   
+               podman stop ollama && podman rm ollama; \                                                                                                               
+               exit 1; \                                                                                                                                               
+           fi; \                                                                                                                                                       
+           sleep 1; \                                                                                                                                                  
+       done; \                                                                                                                                                         
    fi                                                                                                                                                                  
    @# Pull model if not already cached                                                                                                                                 
    @echo "Ensuring $(GPTME_OLLAMA_MODEL) model is available..."

Assistant:
Okay, I'm reviewing the provided Makefile diff. Here's a breakdown of my findings, focusing on the requested areas, with actionable feedback.

Overall Impression: The changes aim to improve the reliability of starting the Ollama container for AI review. The previous implementation had some race conditions and wasn't robust. The new version addresses these issues, but there's still room for improvement.

1. Code Quality and Best Practices

  • Variable Expansion: The use of $$(...) for command substitution within shell commands is generally good for avoiding issues with variable expansion. However, it's inconsistent. The PWD variable should also be expanded using $$(...).
  • Readability: The nested if statements are a bit difficult to follow. Consider breaking them down further or using a more structured approach (e.g., functions).
  • Error Handling: The exit 1 after podman run is good for stopping the build on failure.
  • Portability: Using docker.io/ollama/ollama is better than just ollama/ollama for specifying the image registry.

2. Potential Bugs or Issues

  • Race Condition (Minor): Even with the improved logic, there's a slight race condition. The script checks if Ollama is running before attempting to start it. If the container is in a transient state (e.g., just starting up), the check might fail, leading to a redundant start. This is less critical than the previous issue, but worth noting.
  • Volume Mount: The volume mount -v $$(pwd)/.ollama:/root/.ollama is problematic. pwd expands to the absolute path, which might not be what's intended. Using $PWD is better, but it's still an absolute path. Consider using a relative path or a more robust way to handle the Ollama data directory. The addition of :Z is good for shared mode.
  • Ollama Readiness Check: The readiness check relies on curl to the /api/tags endpoint. While generally reliable, it's possible for the endpoint to be temporarily unavailable. A more sophisticated readiness check might be beneficial (e.g., checking for a specific model to be downloaded).
  • Logging: The script only logs errors when Ollama fails to start within the timeout. Adding more logging throughout the process would be helpful for debugging.

3. Go Idioms and Conventions

  • This section isn't directly applicable, as the code is a Makefile, not Go code.

4. Kubernetes/OpenShift Operator Patterns

  • This section isn't directly applicable, as the code is a Makefile, not Kubernetes/OpenShift operator code.

5. Security Concerns

  • Image Registry: Using docker.io/ollama/ollama is good, as it explicitly specifies the image registry. Ensure that the registry is trusted.
  • Data Directory: The Ollama data directory (.ollama) contains potentially sensitive information (e.g., downloaded models). Ensure that the directory has appropriate permissions and is not publicly accessible.

Actionable Feedback (with Patch)

Here's a patch incorporating the suggested improvements:

--- a/Makefile
+++ b/Makefile
@@ -740,36 +740,36 @@ ai-review-gptme-ollama: gptme ## Review staged git changes using gptme with loca
         @# Check if Ollama is already available (either as existing container or local service)
         @if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \
                 echo "Ollama is already running on port 11434"; \
-                OLLAMA_EXTERNAL=1; \
         else \
-                OLLAMA_EXTERNAL=0; \
                 echo "Ollama not detected, starting container..."; \
-                mkdir -p .ollama; \
-                if ! podman ps | grep -q ollama; then \
-                        podman run -d \
-                                -v $$(pwd)/.ollama:/root/.ollama \
-                                -p 11434:11434 \
-                                --memory=$(OLLAMA_MEMORY)g \
-                                --memory-swap=$(OLLAMA_MEMORY)g \
-                                --name ollama \
-                                ollama/ollama || exit 1; \
-                        echo "Waiting for Ollama to be ready..."; \
-                        for i in $$(seq 1 30); do \
-                                if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \
-                                        echo "Ollama is ready!"; \
-                                        break; \
-                                fi; \
-                                if [ $$i -eq 30 ]; then \
-                                        echo "Error: Ollama failed to start within 30 seconds"; \
-                                        podman logs ollama; \
-                                        podman stop ollama && podman rm ollama; \
-                                        exit 1; \
-                                fi; \
-                                sleep 1; \
-                        done \
-                fi \
+        fi
+        @# Start Ollama container if not already running
+        @if ! curl -s http://localhost:11434/api/tags >/dev/null 2>&1 && ! podman ps | grep -q ollama; then \
+                podman run -d \
+                        -v $${PWD}/.ollama:/root/.ollama:Z \
+                        -p 11434:11434 \
+                        --memory=$(OLLAMA_MEMORY)g \
+                        --memory-swap=$(OLLAMA_MEMORY)g \
+                        --name ollama \
+                        docker.io/ollama/ollama || exit 1; \
+        fi
+        @# Wait for Ollama to be ready
+        @if ! curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \
+                echo "Waiting for Ollama to be ready..."; \
+                for i in $$(seq 1 30); do \
+                        if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then \
+                                echo "Ollama is ready!"; \
+                                break; \
+                        fi; \
+                        if [ $$i -eq 30 ]; then \
+                                echo "Error: Ollama failed to start within 30 seconds"; \
+                                podman logs ollama; \
+                                podman stop ollama && podman rm ollama; \
+                                exit 1; \
+                        fi; \
+                        sleep 1; \
+                done; \
         fi
         @# Pull model if not already cached
         @echo "Ensuring $(GPTME_OLLAMA_MODEL) model is available..."

This patch incorporates the suggested changes, including using $PWD for the volume mount, adding the :Z SELinux volume label, and explicitly specifying the docker.io registry. Further improvements could involve more sophisticated readiness checks and more detailed logging.
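As a sketch of the "more detailed logging" improvement, the recipe's echo calls could go through a small timestamped helper so startup progress is easier to trace across the wait loop. The `log` function name is an assumption for illustration.

```shell
# Hypothetical logging helper: prefix each message with a UTC
# timestamp so the recipe's progress output can be correlated
# with container logs when debugging slow startups.
log() { printf '%s %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$*"; }

log "starting ollama container"
log "ollama ready"
```

Each `echo "..."` in the recipe would become `log "..."`, with no other structural change.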
Stopping and removing Ollama container...
ollama
ollama

- Move AI_REVIEW_PROMPT definition from Makefile to ai/Makefile/Prompt/prompt.example
- Add conditional include logic to use custom prompt if ai/Makefile/Prompt/prompt exists
- Falls back to prompt.example if custom prompt not defined
- Allows users to customize AI review prompts without modifying Makefile

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

openshift-ci bot commented Jul 2, 2025

@kaovilai: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@weshayutin

@kaovilai so I think you have this in a place to share w/ the larger group tomorrow at scrum if you are ok w/ that.
