The first thing is to get a general setup for Gemini Code Assist.
- In VS Code, install the extension "Gemini Code Assist" with a normal install (click Install).
- Once installed, click the stylized Gemini ♦ icon in the installed extensions bar.
- Finally, click the login button and log in/authorize your account to use Gemini Code Assist.
At this point you will have basic access to Gemini Code Assist, but you may receive a notice about needing a valid license.
Now you will have access to Code Assist at four basic levels:
- Auto-complete - Gemini will start to automatically prompt you in a file with what it "thinks" you want. As with most types of autocomplete, you will be prompted and can press Tab to accept the suggestion.
- Generate Code - in a file, this is generally ctrl/cmd + i, and VS Code will prompt you for what to do. You can type a prompt in the window, or have already given instructions in the file (e.g. a comment in a Dockerfile saying "Generate a Dockerfile for running a Node.js container").
- Explain Code - in an existing file, you can highlight code and use the right-click context menu to "Explain this" code. The Gemini chat pane will give an explanation of what is happening in the code.
- Chat Pane - in the chat pane of the extension, you can interact much as with a normal AI chat, but it also allows you to reference files using the @files keyword, so you can ask more complex questions about the code.
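As an illustration of the comment-driven flavor of code generation, you might type a comment describing what you want and let Gemini propose the body. The function below is a hypothetical example of the kind of completion you might accept; your actual suggestion will differ.

```javascript
// Hypothetical example of comment-driven generation: you write the comment,
// Gemini proposes a completion, and you press Tab (or ctrl/cmd + i) to accept
// something like this.
// generate a function that turns a blog title into a URL slug
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics into '-'
    .replace(/(^-|-$)/g, '');    // strip leading/trailing dashes
}

console.log(slugify('My First Blog Post!')); // → my-first-blog-post
```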
First and foremost, Gemini and other code-generation utilities are still very dependent on you "knowing what you are doing".
Whether you are creating a project from scratch or editing an existing one (adding a feature, fixing a bug), you are responsible both for knowing how to ask Gemini for what you need (how to prompt it) and for verifying that the output is what you wanted.
This brings us to the first prompt that you will need.
So first, let's set the stage for what we are going to build.
As a neophyte, I may get a request for an API, or just a way to get data into and out of a system.
Even if I ask Gemini something as simple as:
Build me an API
It will respond with some of the questions it needs answered. In my case, while writing this, Gemini responded with a list of things it needs to know.
1. Clarifying Questions:
Before I can start generating code, I need to understand the basics of your API. Please tell me:
* What is the purpose of this API? What kind of data will it handle? What will it be used for? (e.g., a blog API, a product catalog API, a user management API, etc.)
* What kind of data will the API work with? Can you describe the data structure (e.g., a user has a name, email, and password; a product has a name, description, and price)?
* What operations will the API support? (e.g., creating new data, reading existing data, updating data, deleting data). These are often referred to as CRUD operations (Create, Read, Update, Delete).
* What programming language and framework do you prefer? (e.g., Python with Flask or Django, Node.js with Express, Go, Java with Spring Boot, etc.) If you don't have a preference, I can suggest some popular options.
* Do you have any specific requirements for authentication or authorization? (e.g., no authentication, API keys, OAuth 2.0).
* Do you have a database in mind? (e.g. Postgres, MongoDB, MySQL, SQLite)
As you can see, Gemini is already asking me about things that I may or may not understand, but let's go through this first prompt and see what we can answer.
Q. What is the purpose of this API? What kind of data will it handle? What will it be used for? (e.g., a blog API, a product catalog API, a user management API, etc.)
A. Let's keep this simple and build a blog API; Gemini will likely have lots of examples of this already.
Q. What kind of data will the API work with? Can you describe the data structure (e.g., a user has a name, email, and password; a product has a name, description, and price)?
A. Again, let's keep this simple and say our data will have a basic structure for a blog:
blog_title, blog_body, blog_date, blog_author, blog_category
Notice that Gemini may "recommend" some properties based on what it knows, but it cannot tell you your data structure or requirements; you must know those yourself.
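As a concrete sketch, a single record with that structure might look like this in JavaScript (the values are invented for illustration):

```javascript
// One blog post with the five fields we defined; the values are made up.
const samplePost = {
  blog_title: 'My First Post',
  blog_body: 'Hello, world!',
  blog_date: '2024-03-14T12:00:00Z', // stored as a timestamp
  blog_author: 'Jane Doe',
  blog_category: 'General',
};

console.log(Object.keys(samplePost).length); // → 5
```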
Q. What programming language and framework do you prefer? (e.g., Python with Flask or Django, Node.js with Express, Go, Java with Spring Boot, etc.) If you don't have a preference, I can suggest some popular options.
A. Notice again that Gemini cannot tell you what programming language you are going to use, or want to use. For our purposes I am going to specify Node.js with Express for our API, but often this decision will need to be made by you directly, or may be defined (or narrowed to a subset) by your organization: what languages and frameworks am I allowed to use?
Q. Do you have any specific requirements for authentication or authorization? (e.g., no authentication, API keys, OAuth 2.0)
A. Mostly for simplicity, let's keep this open and say there is no authentication. In a real production environment this may or may not be required; general authentication may be handled by another component in your architecture (e.g. an API gateway), directly by your application, or somewhere in between.
Q. Do you have a database in mind? (e.g. Postgres, MongoDB, MySQL, SQLite)
A. Finally, let's define a datastore for our application, which we will define as Postgres. Again, similar to the programming language/framework, this may be decided for you or may be open to options, but you need to know it; Gemini cannot tell you what to use (although it can recommend).
So now we have the basics that Gemini has asked for; what next? Well, in the previous response Gemini may have also told you what is coming. In my case it told me:
* Define API Endpoints: I'll help you design the URLs (endpoints) that clients will use to interact with your API.
* Define Data Models: I'll help you define the structure of the data that your API will handle.
* Implement API Logic: I'll generate code for the API endpoints, including handling requests, processing data, and returning responses.
* Add Error Handling: I'll include code to handle potential errors and return appropriate error messages.
* Add Documentation: I'll generate documentation for your API, explaining how to use it.
* Add Database Connection: I will add the code to connect to the database you choose.
* Add Authentication: I will add the code to authenticate users.
There is some other information here, but let's start with the basic prompt:
Data: "blog_title, blog_body, blog_date, blog_author, blog_category"
Operations: "add a new book, get a list of all blogs, get details about a specific blog, update a blog's information, delete a blog"
Language: "I would like to use Node.js with Express"
Database: "I would like to use Postgres"
Of note, Google products in particular like prompts to be in markdown (md) format. While this isn't universal, it does seem to be something that LLMs understand. Keep this in mind when you start writing prompts that say "IMPORTANT:" or something of that ilk.
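As an illustration, the prompt above could be packaged as a single markdown-formatted message (the exact formatting here is an assumption, not a requirement):

```
## Build me an API

- **Data:** blog_title, blog_body, blog_date, blog_author, blog_category
- **Operations:** add a new blog, get a list of all blogs, get details about a specific blog, update a blog's information, delete a blog
- **Language:** Node.js with Express
- **Database:** Postgres
```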
Gemini Code Assist's response is basically what you would expect: it restates, with a bit more detail, what I have given it, and now I need to review this and decide if it is correct.
Response
API Requirements:
* Purpose: Blog API
* Data:
* blog_title (string)
* blog_body (string)
* blog_date (date/timestamp)
* blog_author (string)
* blog_category (string)
* Operations (CRUD):
* Create: Add a new blog post (referred to as "add a new book" in the prompt, but we'll use "blog post" for consistency).
* Read:
* Get a list of all blog posts.
* Get details about a specific blog post.
* Update: Update a blog post's information.
* Delete: Delete a blog post.
* Language & Framework: Node.js with Express
* Database: PostgreSQL (Postgres)
* Authentication: No authentication (for now)
Now it is important that I review this, as there is one thing I immediately notice we should change: the "Get a list of all blog posts" item. We don't need to gold-plate everything, but there could be a lot of blog posts (hundreds, thousands, hundreds of thousands), so a "get all" option is probably not great for our application, both functionally and security-wise (some type of DoS attack would be easy once we have a lot of blog posts). So I am going to ask it to add two other options:
- Add an endpoint for getting a list bounded by two dates
- Add an endpoint for getting a list based on the category
So now let's edit the prompt:
Data: "blog_title, blog_body, blog_date, blog_author, blog_category"
Operations: "add a new book, get a list of blogs between 2 dates, get a list of blogs for a category, get details about a specific blog, update a blog's information, delete a blog"
Language: "I would like to use Node.js with Express"
Database: "I would like to use Postgres"
Now the response is more detailed, and it shows the changes that I asked for.
Updated API Requirements:
Based on the latest prompt, here's the refined set of requirements:
* Purpose: Blog API
* Data:
* blog_title (string)
* blog_body (string)
* blog_date (date/timestamp)
* blog_author (string)
* blog_category (string)
* Operations (CRUD + Filtering):
* Create: Add a new blog post.
* Read:
* Get a list of blog posts between two dates.
* Get a list of blog posts for a specific category.
* Get details about a specific blog post.
* Update: Update a blog post's information.
* Delete: Delete a blog post.
* Language & Framework: Node.js with Express
* Database: PostgreSQL (Postgres)
* Authentication: No authentication (for now)
It has also given me a much more detailed view of the API itself, which we will review in the next section.
Along with the details in the previous section, we are now getting to the real details of the code. First, let's start with the endpoints that Gemini has defined.
Updated API Endpoints:
Here's how I'll adjust the API endpoints to accommodate the new filtering requirements:
* Create a new blog post:
* POST /blogs
* Request Body: JSON data representing the new blog post.
* Response: The created blog post data, or an error message.
* Get a list of blog posts between two dates:
* GET /blogs/bydate
* Query Parameters:
* startDate (date/timestamp - ISO 8601 format)
* endDate (date/timestamp - ISO 8601 format)
* Response: An array of blog post objects within the specified date range.
* Get a list of blog posts for a specific category:
* GET /blogs/bycategory
* Query Parameter:
* category (string)
* Response: An array of blog post objects in the specified category.
* Get details about a specific blog post:
* GET /blogs/:id
* id: The unique identifier of the blog post.
* Response: The blog post data, or an error message if not found.
* Update a blog post:
* PUT /blogs/:id
* id: The unique identifier of the blog post.
* Request Body: JSON data with the updated blog post information.
* Response: The updated blog post data, or an error message.
* Delete a blog post:
* DELETE /blogs/:id
* id: The unique identifier of the blog post.
* Response: A success message, or an error message.
Now there are a few things you will start to notice Gemini doing without us telling it to.
- It is assuming a REST API without our input. Again, this is just something you will need to know, as we may have wanted/needed a GraphQL API. In this case the assumption was correct.
- It has also started to define things for us at a more detailed level. Notable here is the date format of ISO 8601. Again, I would consider this "correct", but you may have wanted an epoch or another format, so you will need to pay attention to this.
- It has started to use the idea of an "id", which we have yet to define or identify, so this will need to be watched closely to ensure things align.
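The ISO 8601 detail is worth pinning down early. As a hedged sketch (my own, not Gemini's output), the GET /blogs/bydate handler will need validation roughly like this:

```javascript
// Sketch (assumption, not generated code): validate the startDate/endDate
// query parameters that GET /blogs/bydate expects in ISO 8601 format.
function parseDateRange(query) {
  const start = new Date(query.startDate);
  const end = new Date(query.endDate);
  if (Number.isNaN(start.getTime()) || Number.isNaN(end.getTime())) {
    throw new Error('startDate and endDate must be valid ISO 8601 dates');
  }
  if (start > end) {
    throw new Error('startDate must not be after endDate');
  }
  return { start, end };
}

// A valid range parses; a reversed or malformed range is rejected before
// it ever reaches SQL.
const range = parseDateRange({ startDate: '2024-01-01', endDate: '2024-12-31' });
console.log(range.start < range.end); // → true
```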
As well, with the above detail, Gemini has started to give me "real code" that I can copy and paste into the correct places.
As a note, I won't cover all of the iterations of the code or the basic "hows" of Node.js/Express, so there may be items you will need to research if you don't understand them completely.
Now let's start with the real code and how things are structured and will work.
First, let's see what files need to be created to start. Scrolling up in the chat window, you should see some general project setup that we can start with:
* Create a new directory for your project (e.g., blog-api).
* Navigate to the directory in your terminal.
* Run npm init -y to initialize a new Node.js project.
* Run npm install express pg to install Express and the PostgreSQL client
In our case we are going to follow some general git repo best practices, so we will create an src folder for the code.
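With that convention the project starts out looking something like this (the Dockerfile and docker-compose.yml are added in the container steps that follow):

```
blog-api/
├── Dockerfile            # build instructions for the app container
├── docker-compose.yml    # orchestrates the app and database containers
└── src/
    └── server.js         # the Express application code
```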
Next, it asks us to navigate to the folder and initialize the project. This opens up the next question of what I may want: rather than install Node locally, I would rather Node.js run in a container for both development and production, so the first issue is how to do this.
First, let's try this prompt:
I would like to develop this project in a container
Reasonably enough, it has indicated that I will need three items:
1. Creating a Dockerfile: Defining the environment and steps to build the container image.
2. Creating a docker-compose.yml file: Orchestrating the application container and the PostgreSQL database container.
3. Adjusting the server.js file: Making sure the application can connect to the database within the container network.
Create the Dockerfile in the root folder and then paste in the contents of the file that Gemini recommends.
Now we have the first issue: the file has the line COPY . ., which basically means the Dockerfile and the application need to be in the same folder. However, as we are following some best practices, the Dockerfile is in the root folder but our code is in the src folder, so again we can prompt about this.
Prompt:
my application code will be in the src folder
Create the docker-compose.yml file in the root folder alongside the Dockerfile, and then paste in the contents that Gemini recommends.
Create the server.js file in the ./src folder and paste in the contents that Gemini recommends.
Finally, create the blogRoutes.js file in the ./src folder and paste in the contents that Gemini recommends.
Now, in theory, we should have working code, although a few things haven't really lined up yet, so we can use this as a point to introduce debugging.
First up, let's just try to build our container image. This is handled with the command docker-compose build. This should start okay if Docker is installed properly, but you will eventually get an error that looks something like:
...
=> [app 3/5] COPY package*.json ./ 0.0s
=> ERROR [app 4/5] RUN npm install 0.6s
------
> [app 4/5] RUN npm install:
0.596 npm error code ENOENT
0.596 npm error syscall open
0.596 npm error path /app/package.json
0.596 npm error errno -2
0.596 npm error enoent Could not read package.json: Error: ENOENT: no such file or directory, open '/app/package.json'
0.596 npm error enoent This is related to npm not being able to find a file.
0.596 npm error enoent
0.597 npm error A complete log of this run can be found in: /root/.npm/_logs/2025-03-13T18_28_26_938Z-debug-0.log
------
failed to solve: process "/bin/sh -c npm install" did not complete successfully: exit code: 254
...
Notably, you can see the error is with the line RUN npm install in the Dockerfile, and that it is looking for a file called package.json. Without digging too much, we can see this might be because we didn't call npm init -y.
So let's prompt Gemini again.
where should the `npm init -y` command be called with a container
Gemini responds reasonably to this, but still wants me to run the command locally. At this point, rather than argue with it, I am just going to run the command inside a container; however, we need to be able to run a container first. To do this, run the commands below from the root folder.
docker run -it -v ./src:/app node:18-alpine /bin/sh
# wait for the container to load
cd app
npm init -y
#exit the container
exit
Now you should see a package.json file in the ./src folder, which will let us attempt to build the Docker image.
So again we can run the docker-compose build command.
And again we will get an error similar to the first, looking for the package.json file.
However, we know the file is there. This is probably the first real mistake Gemini has made, as it already had all of the information required to know what is going on. We could probably get Gemini to correct this with some more detailed prompting, but basically Gemini doesn't really understand the context difference between the Dockerfile build process and the Node.js execution process, so it is getting confused. We just need to be sure the package.json file is in the place the build is looking for it. In the Dockerfile, the COPY line needs to be corrected to:
COPY ./src/package*.json ./
So again we can run the docker-compose build command.
Success, finally!
Next we will try to run our containers for the first time. Notice I say "containers", as we have two. If you look in the docker-compose.yml file, you can see that there are two containers:
- Running our application
- Running our PostgreSQL database
Gemini has already done much of the plumbing for us, because we told it we wanted to use a Postgres database. So let's see if it did a good job. First up, let's run the command to launch the containers:
docker-compose up --build
And just to add to this if you are new to Docker: to stop the containers you will need to use the ctrl+c keyboard combo, and you should run the docker-compose down command between each up and ctrl+c, at least until you understand when you need to do this.
Now you will see an error similar to before, but it may be harder to spot, as it will be on the container called app in our docker-compose.yml file. The error will likely look similar to:
app-1 | node:internal/modules/cjs/loader:1143
app-1 | throw err;
app-1 | ^
app-1 |
app-1 | Error: Cannot find module 'express'
app-1 | Require stack:
app-1 | - /app/server.js
app-1 | at Module._resolveFilename (node:internal/modules/cjs/loader:1140:15)
Again, this is something Gemini told us to do a while back, but we didn't need it until now. And again, Gemini was telling us to install something locally, but we don't really want to do that, so we are going to use the same trick as before with the package.json file.
Gemini Instruction was:
Run `npm install express pg` to install Express and the PostgreSQL client.
So we are going to use the same trick as before and run these commands:
docker run -it -v ./src:/app node:18-alpine /bin/sh
# wait for the container to load
cd app
npm install express pg
#exit the container
exit
Now if you check the package.json file, you can see that our new dependencies have been added:
...
"dependencies": {
"express": "^4.21.2",
"pg": "^8.14.0"
}
...
So let's bring the system up again. It seems successful; both containers have come up without errors, so we can move to the next step, which is a basic runtime check of our application.
Back in the Gemini output, we were given some commands to run to test our application. As we are only at the smoke-test phase, I will focus on the simplest scenario, which is to get a list of blogs from our application. For this, Gemini has given this command:
curl http://localhost:3000/blogs/bycategory?category=Test
This will return the almost-expected result of "Server Error", which is a default response from the Node.js code. By looking at the output in your Docker shell, you can see more information:
app-1 | Blog API server listening on port 3000
app-1 | Connected to PostgreSQL database
app-1 | pool.query is not a function
app-1 | (node:1) Warning: Accessing non-existent property 'query' of module exports inside circular dependency
app-1 | (Use `node --trace-warnings ...` to show where the warning was created)
So this shows that we can connect to the database, but that pool.query doesn't exist?
This appears to be the second real mistake by Gemini, but let's see if Gemini can fix it.
Let's prompt Gemini with some info:
The pool object is not available in the blogRoutes.js file. Here is the error pool.query is not a function app-1 | (node:1) Warning: Accessing non-existent property 'query' of module exports inside circular dependency
There is a lot in the response, as it is changing files, but it appears that Gemini can see the error of its ways and correct the files. Below are the pertinent parts of the response.
Key Changes in server.js:
* module.exports = pool;: This line is crucial. It makes the pool object available to other modules that require('./server.js').
* app.use('/blogs', blogRoutes(pool));: This line now passes the pool object to the blogRoutes function.
....
Key Changes in blogRoutes.js:
* const blogRoutes = (pool) => { ... }: The blogRoutes function now accepts the pool as an argument.
* const result = await pool.query(...): Inside each route handler, you can now use pool.query to interact with the database.
* module.exports = blogRoutes;: This line exports the function that accepts the pool.
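To make that pattern concrete, here is a minimal sketch of the factory shape described above. pg and Express are stubbed out here, so this shows only the dependency flow, not Gemini's actual files:

```javascript
// Sketch of the pattern: server.js creates the pool and hands it to the
// blogRoutes factory, so blogRoutes never has to require('./server.js')
// and no circular dependency arises.
function blogRoutes(pool) {
  // In the real file this returns an Express router; the factory shape
  // (accepting the pool as an argument) is the point.
  return {
    async byCategory(category) {
      const result = await pool.query(
        'SELECT * FROM blogs WHERE blog_category = $1',
        [category]
      );
      return result.rows;
    },
  };
}

// A stub standing in for new pg.Pool(...) to demonstrate the call flow.
const fakePool = {
  async query(text, params) {
    return { rows: [{ id: 1, blog_category: params[0] }] };
  },
};

blogRoutes(fakePool)
  .byCategory('Test')
  .then((rows) => console.log(rows[0].blog_category)); // → Test
```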
Finally, it appears we can now select from our database. However, if you have been paying attention, you will have noticed that we never created our database and tables, and that is really the next thing we need to do.
As a concept, I am going to assume we want to run our database table create script with a container. In practice this would be done via a utility such as Liquibase. So let us try this.
Prompt:
I would like to run database table create script with liquibase in a container
Gemini gives back the response you would expect, updating the docker-compose file and giving the corresponding Liquibase files. However, I don't really want the Liquibase changelog syntax and would prefer that plain SQL be used by Liquibase for creating my tables. So again, let's see.
Prompt:
can you use sql files with liquibase?
And the answer is yes, so let's update everything it is outputting for us.
Here are the changes that we are making:
- Create a new folder at the root for Liquibase and a subfolder for the changelog (./liquibase/changelog)
- In the changelog folder, add a file called db.changelog-master.xml
- As indicated by Gemini, add this content:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd" >
<include file="sql/01-create-blogs-table.sql" relativeToChangelogFile="true" />
</databaseChangeLog>
- Create a new folder called sql in the liquibase folder (./liquibase/sql)
- In the sql folder, add a file called 01-create-blogs-table.sql
- As indicated by Gemini, add this content:
-- liquibase formatted sql
-- changeset sheen:1
CREATE TABLE blogs (
id SERIAL PRIMARY KEY,
blog_title VARCHAR(255) NOT NULL,
blog_body TEXT NOT NULL,
blog_date TIMESTAMP NOT NULL,
blog_author VARCHAR(255) NOT NULL,
blog_category VARCHAR(255) NOT NULL
);
-- rollback DROP TABLE blogs;
- Finally, update the docker-compose file with:
liquibase:
image: liquibase/liquibase:4.24.0
depends_on:
- db
volumes:
- ./liquibase/changelog:/liquibase/changelog
- ./liquibase/sql:/liquibase/sql
environment:
- LIQUIBASE_URL=jdbc:postgresql://db:5432/${PGDATABASE}
- LIQUIBASE_USER=${PGUSER}
- LIQUIBASE_PASSWORD=${PGPASSWORD}
- LIQUIBASE_CHANGELOG_FILE=/liquibase/changelog/db.changelog-master.xml
command: update
Overall this isn't looking too bad, but before we even try to run it, let's clean up the database properties. It is notable that Gemini is just "generating" things from scratch each time, and while it does a pretty good job of parameterizing items, it doesn't keep great track of what it has already parameterized and how; sometimes it gets it right, sometimes it doesn't. You can see above that it has started to introduce environment variables to the file while at the same time hardcoding the port for Postgres. For our purposes we are going to hardcode all of these values in the docker-compose file and assume they will be env vars in our containers. These values will need to align with our Postgres database values, so the service should look like:
liquibase:
image: liquibase/liquibase:4.24.0
depends_on:
- db
volumes:
- ./liquibase/changelog:/liquibase/changelog
- ./liquibase/sql:/liquibase/sql
environment:
- LIQUIBASE_URL=jdbc:postgresql://db:5432/your_db_name
- LIQUIBASE_USER=your_db_user
- LIQUIBASE_PASSWORD=your_db_password
- LIQUIBASE_CHANGELOG_FILE=/liquibase/changelog/db.changelog-master.xml
command: update
I can see some other items which may be problems, but let's see how well Gemini has done. First up, we run our docker-compose build and up commands, and we get the next error:
liquibase-1 | Error parsing command line: Invalid argument '--changelog-file': missing required argument. If you need to configure new liquibase project files and arguments, run the 'liquibase init project' command.
There are two main items here: first, Liquibase can't find the changelog file, which we have already given it; second, we have yet to init the Liquibase project. So let's try to prompt Gemini with this new error message.
I am getting this error message
Error parsing command line: Invalid argument '--changelog-file': missing required argument. If you need to configure new liquibase project files and arguments, run the 'liquibase init project' command.
Well, I have tried this a few ways and Gemini cannot seem to give me an answer to this error, so I'll go to trusty Google search to see what I can find.
Well, here is another one for Liquibase: for some reason Gemini is using the wrong environment variable names. Update your docker-compose file with:
- LIQUIBASE_COMMAND_URL=jdbc:postgresql://db:5432/your_db_name
- LIQUIBASE_COMMAND_USERNAME=your_db_user
- LIQUIBASE_COMMAND_PASSWORD=your_db_password
- LIQUIBASE_COMMAND_CHANGELOG_FILE=db.changelog-master.xml
Once this has been updated, you can attempt to run your commands again to bring up Docker, and all things being equal it should be successful, even if you are just getting a response of [] (an empty array).
Let's now try another basic test of inputting data, which Gemini gave us previously:
curl -X POST -H "Content-Type: application/json" -d '{"blog_title": "Test Blog", "blog_body": "This is a test blog", "blog_date": "2024-03-14T12:00:00Z", "blog_author": "Test Author", "blog_category": "Test"}' http://localhost:3000/blogs
This should also run without error and if you now run the initial test again:
curl http://localhost:3000/blogs/bycategory?category=Test
[{"id":1,"blog_title":"Test Blog","blog_body":"This is a test blog","blog_date":"2024-03-14T12:00:00.000Z","blog_author":"Test Author","blog_category":"Test"}]
There should be a real response now, and the basics of our code are working.
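If you prefer Node over curl for these smoke tests, here is a small hedged sketch (assuming the server is listening on localhost:3000 as above):

```javascript
// Build the request URL; URLSearchParams handles the query-string encoding.
const params = new URLSearchParams({ category: 'Test' });
const url = `http://localhost:3000/blogs/bycategory?${params.toString()}`;
console.log(url); // → http://localhost:3000/blogs/bycategory?category=Test
// On Node 18+ you could then run: const res = await fetch(url);
```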
Finally, we have something working, but before we can say "complete" we obviously should have some tests.
Prompt:
How can I unit test this project?
As there is a lot of output from Gemini in this case, I will summarize the response.
In a nutshell, Gemini is recommending Jest as the testing framework along with Jest's assertion library, which is a reasonable choice. It is also recommending a folder structure that should be sufficient for our purposes, and finally it gives a basic outline of what to test for both the REST API and the database, with some mocking where necessary.
So first up, let's create everything and see how it goes:
- Create a directory off the root called __tests__ and create a file in that directory called blogRoutes.test.js
- Add the content:
// __tests__/blogRoutes.test.js
const request = require('supertest');
const express = require('express');
const { Pool } = require('pg');
const blogRoutes = require('../src/blogRoutes'); // Adjust the path if needed
// Mock the database pool
jest.mock('pg', () => {
const mPool = {
connect: jest.fn(),
query: jest.fn(),
end: jest.fn(),
};
return { Pool: jest.fn(() => mPool) };
});
describe('Blog Routes', () => {
let app;
let pool;
beforeAll(() => {
pool = new Pool();
app = express();
app.use(express.json());
app.use('/blogs', blogRoutes(pool));
});
afterAll(async () => {
await pool.end();
});
beforeEach(() => {
pool.query.mockReset();
});
it('should create a new blog post', async () => {
const mockBlog = { id: 1, blog_title: 'Test Blog', blog_body: 'Test Body', blog_date: '2024-03-14T12:00:00.000Z', blog_author: 'Test Author', blog_category: 'Test' };
pool.query.mockResolvedValueOnce({ rows: [mockBlog] });
const response = await request(app)
.post('/blogs')
.send({ blog_title: 'Test Blog', blog_body: 'Test Body', blog_date: '2024-03-14T12:00:00.000Z', blog_author: 'Test Author', blog_category: 'Test' });
expect(response.statusCode).toBe(200);
expect(response.body).toEqual(mockBlog);
expect(pool.query).toHaveBeenCalledWith(
'INSERT INTO blogs (blog_title, blog_body, blog_date, blog_author, blog_category) VALUES($1, $2, $3, $4, $5) RETURNING *',
['Test Blog', 'Test Body', '2024-03-14T12:00:00.000Z', 'Test Author', 'Test']
);
});
it('should get a list of blogs by category', async () => {
const mockBlogs = [{ id: 1, blog_title: 'Test Blog', blog_body: 'Test Body', blog_date: '2024-03-14T12:00:00.000Z', blog_author: 'Test Author', blog_category: 'Test' }];
pool.query.mockResolvedValueOnce({ rows: mockBlogs });
const response = await request(app)
.get('/blogs/bycategory?category=Test');
expect(response.statusCode).toBe(200);
expect(response.body).toEqual(mockBlogs);
expect(pool.query).toHaveBeenCalledWith(
'SELECT * FROM blogs WHERE blog_category = $1',
['Test']
);
});
// Add more tests for other routes...
});
- Create another file called server.test.js
- Add the content:
// __tests__/server.test.js
const request = require('supertest');
const express = require('express');
const { Pool } = require('pg');
const server = require('../src/server'); // Adjust the path if needed
// Mock the database pool
jest.mock('pg', () => {
const mPool = {
connect: jest.fn(),
query: jest.fn(),
end: jest.fn(),
};
return { Pool: jest.fn(() => mPool) };
});
describe('Server', () => {
let app;
let pool;
beforeAll(() => {
pool = new Pool();
app = express();
app.use(express.json());
});
afterAll(async () => {
await pool.end();
});
it('should connect to the database', async () => {
pool.connect.mockResolvedValueOnce();
expect(pool.connect).toHaveBeenCalled();
});
});
- Now we need to install Jest as well as the other dependencies required by our tests:
docker run -it -v ./src:/app node:18-alpine /bin/sh
# wait for the container to load
cd app
npm install --save-dev jest supertest
#exit the container
exit
- Next, update the package.json file with the "test" script set to "jest" (i.e. "test": "jest")
- Finally, we want to add our last container to the docker-compose.yml file by adding this service:
test:
build: .
depends_on:
- db
environment:
- PGUSER=your_db_user
- PGHOST=db
- PGDATABASE=your_db_name
- PGPASSWORD=your_db_password
- PGPORT=5432
volumes:
- ./src:/app
- ./__tests__:/app/__tests__
- /app/node_modules
working_dir: /app
command: npm test
To run this, we are going to change the docker command a bit to run the tests:
docker-compose up --build --abort-on-container-exit --exit-code-from test test
After running this you should see an error similar to:
FAIL __tests__/blogRoutes.test.js
test-1 | ● Test suite failed to run
test-1 |
test-1 | Cannot find module '../src/blogRoutes' from '__tests__/blogRoutes.test.js'
test-1 |
test-1 | 3 | const express = require('express');
test-1 | 4 | const { Pool } = require('pg');
test-1 | > 5 | const blogRoutes = require('../src/blogRoutes'); // Adjust the path if needed
test-1 | | ^
test-1 | 6 |
test-1 | 7 | // Mock the database pool
test-1 | 8 | jest.mock('pg', () => {
test-1 |
test-1 | at Resolver._throwModNotFoundError (node_modules/jest-resolve/build/resolver.js:427:11)
test-1 | at Object.require (__tests__/blogRoutes.test.js:5:20)
So once again we have a path error. This seems to be a consistent theme with Gemini.
This one is pretty obvious, and Gemini even warned us with its "Adjust the path if needed" comments, but these lines in the generated test files need to be updated:
const blogRoutes = require('../blogRoutes'); // Adjust the path if needed
and also
const server = require('../server'); // Adjust the path if needed
This gets us to the final version, with the tests fully passing, though it still looks like there are little errors.
I don't want to turn this into a unit-testing-with-Jest seminar, so we are going to finish with a few final commands.
In your package.json file update the jest command to:
jest --forceExit --detectOpenHandles --verbose --coverage
And now you should be able to use this command to run your docker compose from start to finish, get the coverage numbers, and see the verbose output that logs what is running (making it easier to identify what is failing).
Now run this command from the shell.
docker-compose up --build --abort-on-container-exit --exit-code-from test test
This would be suitable for a CI/CD pipeline: it outputs the proper exit code and shuts down cleanly, so it can run your unit tests during your PR process. And because it runs in Docker Compose, you can run it locally to be sure everything is working before committing the code to GitHub (or your repo of choice) and running it in the cloud.
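As a sketch of that pipeline step (the workflow file, name, and trigger here are assumptions, not something generated in this walkthrough), a GitHub Actions job could run the same command on every pull request:

```yaml
# .github/workflows/test.yml (hypothetical)
name: unit-tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Same command we ran locally; GitHub's hosted runners ship Compose v2,
      # so it's `docker compose` rather than `docker-compose`
      - run: docker compose up --build --abort-on-container-exit --exit-code-from test test
```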
As well, if you look at git you can see thousands of files that apparently need committing, which doesn't make sense for the code we have written.
First, let's just ask Gemini why there are so many files:
Why are there so many files to commit to git?
Gemini responds with a good answer and a few solutions. To keep things simple we are going to go with the .gitignore file solution.
.gitignore: It's a good practice to add node_modules to your .gitignore file. This will prevent you from accidentally committing the node_modules directory to your Git repository.
However, Gemini didn't give us a .gitignore file to add, so next prompt:
can you give me the .gitignore file for node_modules?
And sure enough, Gemini gives us a reasonable start to a .gitignore file.
- In the root folder create a .gitignore file and then add these contents.
# Node.js
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Environment files
.env
.env.*
# Docker
docker-compose.override.yml
.dockerenv
# Coverage directory used by tools like istanbul
coverage/
# Logs
logs
*.log
# Dependency directories
jspm_packages/
# System Files
.DS_Store
Thumbs.db
# IDE files
.idea/
*.iml
# Liquibase
# You might want to ignore the liquibase-output directory if you use it
liquibase-output/
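To sanity-check that the ignore rules actually work, you can run a quick experiment in a throwaway repository (a temp directory, nothing to do with this project):

```shell
set -e
tmp=$(mktemp -d)                     # throwaway repo for the experiment
cd "$tmp"
git init -q
mkdir node_modules
echo "module.exports = {};" > node_modules/dep.js
echo "node_modules/" > .gitignore
# Only .gitignore shows up as untracked; node_modules/ is ignored
git status --porcelain
```

If node_modules was already committed before the .gitignore existed, `git rm -r --cached node_modules` untracks it without deleting the files on disk.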
Overall this isn't an enterprise-level project. We haven't really dealt with authentication/authorization or various code-security concerns, and I would probably use an ORM for this type of work, but you can easily see how the concepts can be expanded, and there are some definitive takeaways:
- Gemini does a good job of looking at your code and current setup and understanding/updating what exists
- Gemini does a good job of looking at what you have and recommending a reasonable solution to the question
- Gemini does a good job of following the instructions you have given it, either in the prompt or via markdown files
- Gemini isn't infallible, and certainly has a difficult time keeping context straight (e.g. inside the container vs outside)
- Gemini doesn't know everything; even in this basic example it was giving incorrect environment variables for Liquibase that, even when prompted, it wouldn't/couldn't correct
- The integration with the IDE is still lacking. Although there is a level of code completion, the copy/paste nature of the prompting wouldn't be adequate for a large project with hundreds or thousands of tables/files/objects.
It is easy to see that an AI assistant like this could help in several ways. Having your architecture clearly laid out in markdown files (ADRs?) as well as some sample code/templates (possibly even generated by Gemini in the first place) would serve both as guidance for Gemini and as guidance for your developers.
There is some testing to be done on whether it is better to lay the structure out at the data level and let the system generate code for everything, or to start from an OpenAPI spec. There are obviously pros and cons to each, but it is easy to see that Gemini and other similar services could replace templating engines in the not-too-distant future, offering a more flexible option than current templates.
It is, however, notable that with all of this you still need to "know what you are doing". Gemini makes recommendations, it makes mistakes, and it cannot fix all of its own bugs. It doesn't give "proper" code on the first pass. You will still need to understand what Gemini is giving you, how to code, and how to debug in order to make these tools work for you and not against you.