Automated Issue Suggestions With OpenAI API In GitHub Actions
Hey guys! Ever wished you had a little helper that could automatically suggest the next steps for your GitHub issues? Well, buckle up! In this article, we're diving deep into how you can create an automated system using GitHub Actions and the OpenAI API to suggest solutions and next steps for issues. It’s like having a super-smart assistant right in your repository! Let’s get started!
What We're Building
At its core, we're going to build a GitHub Action that triggers whenever a new issue is opened or an existing one is edited. This action will then:
- Grab the title and body of the issue.
- Send this information to the OpenAI API.
- Get a suggestion for the next step or a fix.
- Post this suggestion as a comment on the issue.
Sounds cool, right? Let's break down how we can make this happen.
Setting Up the Workflow
The first thing we need to do is set up our workflow file. This file tells GitHub Actions what to do, when to do it, and how to do it. Let’s create a new file in our repository at .github/workflows/auto-suggest.yml
and paste the following:
```yaml
# .github/workflows/auto-suggest.yml
name: Auto-suggest next step

on:
  issues:
    types: [opened, edited]

jobs:
  suggest:
    runs-on: ubuntu-latest
    permissions:
      issues: write  # needed to post a comment
    steps:
      # 1️⃣ Set up Node (or Python) runtime
      - uses: actions/setup-node@v3
        with:
          node-version: '20'

      # 2️⃣ Checkout repo (so the script is available)
      - uses: actions/checkout@v4

      # 3️⃣ Install dependencies (if any)
      - run: npm ci  # or `pip install -r requirements.txt`

      # 4️⃣ Run the suggestion script
      - name: Generate suggestion
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          node .github/scripts/auto-suggest.js "${{ github.event.issue.title }}" "${{ github.event.issue.body }}" "${{ github.event.issue.number }}"
```
Let’s dissect this YAML file piece by piece:

- `name: Auto-suggest next step`: This is the name of our workflow.
- `on:`: This section defines when the workflow will run.
  - `issues:`: The workflow will trigger on issue-related events.
  - `types: [opened, edited]`: Specifically, it will run when an issue is opened or edited.
- `jobs:`: This section contains the jobs that will be executed.
  - `suggest:`: The name of our job.
  - `runs-on: ubuntu-latest`: Specifies that this job will run on an Ubuntu virtual machine.
  - `permissions:`:
    - `issues: write`: Grants the job permission to post comments on issues. This is crucial for our action to work!
  - `steps:`: This is where we define the individual steps that our job will perform.
    - 1️⃣ Set up Node (or Python) runtime:
      - `uses: actions/setup-node@v3`: Uses the official GitHub Action to set up a Node.js environment.
      - `with: node-version: '20'`: Specifies that we’re using Node.js version 20.
    - 2️⃣ Checkout repo (so the script is available):
      - `uses: actions/checkout@v4`: This action checks out our repository, making our code available to the workflow.
    - 3️⃣ Install dependencies (if any):
      - `run: npm ci`: Installs our project’s dependencies using `npm ci`. If you’re using Python, you might use `pip install -r requirements.txt` instead.
    - 4️⃣ Run the suggestion script:
      - `name: Generate suggestion`: A descriptive name for this step.
      - `env:`: Defines environment variables. `OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}` injects our OpenAI API key, stored as a secret in GitHub, while `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` is provided automatically by GitHub Actions and lets us interact with the GitHub API.
      - `run:`: The command that executes our script. `node .github/scripts/auto-suggest.js "${{ github.event.issue.title }}" "${{ github.event.issue.body }}" "${{ github.event.issue.number }}"` runs our `auto-suggest.js` script, passing the issue title, body, and number as arguments.
Diving Deeper into the YAML Structure
Let's really break down why each part of this YAML file is crucial for our automation to function correctly. You might be thinking, "Okay, cool, it sets up a workflow," but let's get into the nitty-gritty so you understand what's happening under the hood.
First off, `name: Auto-suggest next step` is more than just a label. It's how this workflow will appear in your repository's Actions tab. A clear name makes it easier to manage multiple workflows. Think of it as the first impression your workflow makes.
Next, the `on:` section is the heartbeat of your workflow. It dictates when the magic happens. The `issues` trigger with `types: [opened, edited]` is incredibly specific. We're telling GitHub, "Hey, only fire this workflow when a new issue is opened or when an existing one is edited." This is super efficient because we're not wasting resources running the workflow on every single issue event, like closing or deleting.
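If you ever want the workflow to fire on more events, this trigger block is the only thing you'd touch. For example, a hypothetical variant that also reacts when someone labels an issue (`labeled` is one of the issue event activity types) could look like this:

```yaml
# Sketch: also run when a label is added to an issue.
on:
  issues:
    types: [opened, edited, labeled]
```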
The `jobs:` section is where the real work gets defined. A job is a set of steps that execute on the same runner (in our case, `ubuntu-latest`). The `runs-on` directive is like choosing your workstation. Ubuntu is a popular choice for CI/CD because it's reliable and has a vast ecosystem of tools.
But hold on, the `permissions:` section is a critical security consideration. We're explicitly granting `issues: write` permission. This is essential for our workflow to post comments, but it's also a best practice to request only the minimum permissions necessary. If we didn't need to post comments, we wouldn't include this. Security first, guys!
Now, the `steps:` are the individual actions that make up our job. Each step is a mini-program that performs a specific task. Let's break down the first few steps:

- `uses: actions/setup-node@v3`: This isn't just some random line; it's a call to a pre-built GitHub Action. GitHub Actions are reusable units of code that you can plug into your workflows. This particular action sets up a Node.js environment. Think of it as installing Node.js on our virtual machine.
- `with: node-version: '20'`: This is how we configure the `setup-node` action. We're telling it to use Node.js version 20. Specifying a version is important for consistency and to avoid compatibility issues.
- `uses: actions/checkout@v4`: Another pre-built action! This one is incredibly important. It checks out your repository's code onto the runner. Without this, our script wouldn't be able to run because it wouldn't have access to the code.
- `run: npm ci`: This is where we start running shell commands. `npm ci` is similar to `npm install`, but it's optimized for CI/CD environments. It ensures a clean and consistent installation of dependencies based on your `package-lock.json` file. If you're using Python, you'd use `pip install -r requirements.txt` here.
The 4️⃣ Run the suggestion script step is where the magic truly happens. Notice the `env:` section? We're defining environment variables here. `OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}` is crucial: we're injecting our OpenAI API key, which is stored as a secret in GitHub. Never hardcode your API keys! Storing them as secrets keeps them safe.
And finally, the `run:` command. This is the command that actually executes our script: `node .github/scripts/auto-suggest.js "${{ github.event.issue.title }}" "${{ github.event.issue.body }}" "${{ github.event.issue.number }}"`. We're passing the issue title, body, and number as arguments to our script. These are provided by GitHub Actions through the `github.event` context.
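To make that hand-off concrete, here's a tiny sketch of how those three arguments land in Node. The values in `argv` below are made-up stand-ins for what GitHub would pass; note that everything arrives as a string, including the issue number:

```javascript
// process.argv is [node binary, script path, ...arguments],
// so the script's own arguments start at index 2.
const argv = ['/usr/bin/node', 'auto-suggest.js', 'Bug: crash on start', 'Steps to reproduce...', '42'];

const [title, body, issueNumber] = argv.slice(2);
console.log(title);              // prints: Bug: crash on start
console.log(typeof issueNumber); // prints: string — '42' is not a number
```

This is why the script can interpolate `issueNumber` straight into a shell command without converting it first.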
Crafting the Suggestion Script
Next up, we need to create the script that will interact with the OpenAI API. Create a new file at `.github/scripts/auto-suggest.js` and paste the following code:
```javascript
// .github/scripts/auto-suggest.js
const https = require('https');
const { execSync } = require('child_process');

const [title, body, issueNumber] = process.argv.slice(2);

const prompt = `Given this issue description, suggest the most likely next step or fix in one sentence.\n\nTitle: ${title}\n\nBody: ${body}`;

const data = JSON.stringify({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: prompt }],
  max_tokens: 60,
});

const options = {
  hostname: 'api.openai.com',
  path: '/v1/chat/completions',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
};

const req = https.request(options, (res) => {
  let chunks = '';
  res.on('data', (d) => (chunks += d));
  res.on('end', () => {
    const reply = JSON.parse(chunks).choices[0].message.content.trim();
    // Post comment back to the issue
    execSync(
      `gh issue comment ${issueNumber} --body "${reply.replace(/"/g, '\\\"')}"`
    );
  });
});

req.on('error', (e) => console.error(e));
req.write(data);
req.end();
```
Alright, let's break down this JavaScript code. This script is the brains of our operation, the piece that talks to the OpenAI API and posts the suggestion back to GitHub. It might look intimidating at first, but let's walk through it step by step.
- `const https = require('https');`: This line imports the built-in `https` module in Node.js. We'll use this module to make an HTTPS request to the OpenAI API. Think of it as our telephone line to OpenAI.
- `const { execSync } = require('child_process');`: This imports the `execSync` function from the `child_process` module. We'll use this to run shell commands, specifically to post a comment on the GitHub issue. It's like having a command-line interface within our script.
- `const [title, body, issueNumber] = process.argv.slice(2);`: This line retrieves the arguments passed to our script. Remember those `"${{ github.event.issue.title }}"`, `"${{ github.event.issue.body }}"`, and `"${{ github.event.issue.number }}"` in our workflow YAML? This is where those values come in. `process.argv` is an array of command-line arguments, and we're slicing it to get the title, body, and issue number. It's like unpacking the information we received.
- `` const prompt = `Given this issue description, suggest the most likely next step or fix in one sentence.\n\nTitle: ${title}\n\nBody: ${body}`; ``: This is where we craft our prompt for the OpenAI API. The prompt is the instruction we give to the AI model. We're telling it to suggest the next step or fix for the issue, given the title and body. Notice how we're using template literals to include the title and body in the prompt. This is like writing a clear and concise email to the AI.
- `const data = JSON.stringify({ ... });`: This is the payload we'll send to the OpenAI API. We're constructing a JSON object with the following properties:
  - `model: 'gpt-4o-mini'`: Specifies the OpenAI model we want to use. We're using `gpt-4o-mini` here, but you can choose other models depending on your needs.
  - `messages: [{ role: 'user', content: prompt }]`: This is where we include our prompt. We're sending a single message with the role `user` and the content of our prompt.
  - `max_tokens: 60`: Limits the response length from the OpenAI API to 60 tokens. This helps control costs and ensures concise suggestions.
- `const options = { ... };`: This is where we define the options for our HTTPS request. We're specifying the hostname (`api.openai.com`), path (`/v1/chat/completions`), method (`POST`), and headers. The headers are crucial for authentication and content type. Notice the `Authorization: Bearer ${process.env.OPENAI_API_KEY}` header. This is how we include our OpenAI API key in the request. Again, we're using an environment variable here, not hardcoding the key!
- `const req = https.request(options, (res) => { ... });`: This is where we make the HTTPS request to the OpenAI API. We're using `https.request` to create a new request object. The first argument is our `options` object, and the second is a callback function that will be called when the response is received.
- `let chunks = ''; res.on('data', (d) => (chunks += d)); res.on('end', () => { ... });`: This is how we handle the response from the OpenAI API. We're collecting the response data in chunks and then parsing it when the response ends.
- `const reply = JSON.parse(chunks).choices[0].message.content.trim();`: This line extracts the suggestion from the OpenAI API response. We're parsing the JSON response, accessing the `choices` array, and then getting the `content` of the first message. We're also trimming any extra whitespace.
- `` execSync(`gh issue comment ${issueNumber} --body "${reply.replace(/"/g, '\\\"')}"`); ``: This is where we post the suggestion as a comment on the GitHub issue. We're using the `gh` CLI (GitHub CLI) to run the `issue comment` command, passing the issue number and the suggestion as arguments. Notice the `reply.replace(/"/g, '\\\"')` part: this escapes any double quotes in the suggestion so they don't break the command.
- `req.on('error', (e) => console.error(e));`: This is how we handle errors during the HTTPS request. We're logging any errors to the console.
- `req.write(data); req.end();`: These lines send the request to the OpenAI API. We're writing the data payload to the request and then ending the request.
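That quote-escaping step is easy to get wrong, so it can help to pull it into a tiny helper you can test in isolation. Here's a sketch (the name `escapeForDoubleQuotes` is hypothetical, not part of the script above):

```javascript
// Hypothetical helper: escape double quotes so the reply can be embedded
// inside a double-quoted shell argument, mirroring the inline replace above.
function escapeForDoubleQuotes(text) {
  return text.replace(/"/g, '\\"');
}

// A reply containing quotes survives being wrapped in "...".
const escaped = escapeForDoubleQuotes('Try "npm ci" first.');
console.log(escaped); // prints: Try \"npm ci\" first.
```

One caveat worth knowing: this only handles double quotes. Backticks, `$`, and backslashes in the reply could still confuse the shell, which is one reason larger scripts often write the comment body to a file and pass it with `gh issue comment --body-file` instead.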
Breaking Down the OpenAI API Interaction
Let's zoom in even further on how our script interacts with the OpenAI API. This is a critical piece of the puzzle, and understanding it will empower you to customize the script for your specific needs.
At its core, we're using the Chat Completions API provided by OpenAI. This API is designed for conversational AI, which makes it perfect for our use case of suggesting next steps or fixes for issues. We send a message (our prompt) to the API, and it responds with a generated message (our suggestion).
The `data` object we construct is the heart of our request. Let's dissect it again:
- `model: 'gpt-4o-mini'`: This specifies the language model we want to use. OpenAI offers a variety of models, each with its own strengths and weaknesses. `gpt-4o-mini` is a capable and inexpensive model, but you might experiment with others like `gpt-3.5-turbo` or even fine-tuned models for specific tasks. The choice of model directly impacts the quality and cost of the suggestions.
- `messages: [{ role: 'user', content: prompt }]`: This is where we define the conversation history. In our case, we're sending a single message with the `role` set to `user` and the `content` set to our prompt. The `role` is important because it tells the API who is speaking. In a more complex conversation, you might have messages with the `role` set to `system` (to provide initial instructions), `user` (for user input), and `assistant` (for the API's responses).
- `max_tokens: 60`: This limits the length of the API's response. A token is roughly three-quarters of an English word, so `max_tokens: 60` caps the response at around 45 words. This is a crucial parameter for controlling costs and ensuring concise suggestions. You might need to adjust this value depending on the complexity of the issues you're dealing with.
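To illustrate that multi-role shape, here's a hedged sketch of what a richer payload could look like. The instruction text is made up for the example; our actual script sends only a single `user` message:

```javascript
// A conversation payload with a standing "system" instruction plus the user prompt.
const messages = [
  { role: 'system', content: 'You are a triage bot for GitHub issues. Answer in one sentence.' },
  { role: 'user', content: 'Title: Crash on startup\n\nBody: See attached stack trace.' },
];

const payload = JSON.stringify({ model: 'gpt-4o-mini', messages, max_tokens: 60 });
console.log(JSON.parse(payload).messages.length); // prints: 2
```

The `system` message is a good home for standing instructions (tone, length, format) so the per-issue `user` prompt can stay short.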
The `options` object defines the details of our HTTPS request. The `hostname` (`api.openai.com`) and `path` (`/v1/chat/completions`) tell us where to send the request. The `method` specifies that we're using a `POST` request, which is the standard method for sending data to an API. The `headers` are crucial for authentication and content type.

The `Authorization: Bearer ${process.env.OPENAI_API_KEY}` header is especially important. This is where we include our OpenAI API key, which authenticates our request. Never hardcode your API key! Storing it as an environment variable (and then as a GitHub secret) is the secure way to go.
When we receive the response from the OpenAI API, we need to parse the JSON to extract the suggestion. The response structure looks something like this:
```json
{
  "choices": [
    {
      "message": {
        "content": "The suggested next step is to investigate the database connection."
      }
    }
  ]
}
```
We're accessing the `content` property of the first message in the `choices` array. This is where the AI-generated suggestion is stored.
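Our script assumes the response always has exactly that shape, which is fine for a demo, but API errors come back as JSON with no `choices` array at all. A more defensive version could wrap the extraction in a small helper (the name `extractReply` is just for illustration):

```javascript
// Hypothetical helper: pull the suggestion text out of a raw response body,
// returning null instead of throwing when the shape is unexpected.
function extractReply(rawBody) {
  try {
    const parsed = JSON.parse(rawBody);
    const content = parsed?.choices?.[0]?.message?.content;
    return typeof content === 'string' ? content.trim() : null;
  } catch {
    return null; // body was not valid JSON at all
  }
}

console.log(extractReply('{"choices":[{"message":{"content":" Check the DB. "}}]}')); // prints: Check the DB.
console.log(extractReply('{"error":{"message":"invalid api key"}}')); // prints: null
```

A `null` return is your cue to log the raw body and skip posting a comment, rather than crashing inside the `end` handler.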
Securing Your OpenAI API Key
Before we move on, let’s talk about the elephant in the room: securing your OpenAI API key. You absolutely, positively DO NOT want to hardcode this key into your script or commit it to your repository. Doing so is a major security risk, as anyone who gains access to your repository could use your API key and rack up charges (or worse).
The solution is to use GitHub Secrets. GitHub Secrets are encrypted environment variables that you can store in your repository settings. These secrets are only accessible to GitHub Actions, so they’re a safe way to store sensitive information.
To store your OpenAI API key as a secret:
- Go to your repository on GitHub.
- Click on the “Settings” tab.
- Click on “Secrets and variables” in the left sidebar, then “Actions”.
- Click the “New repository secret” button.
- Enter `OPENAI_API_KEY` as the name of the secret.
- Paste your OpenAI API key into the “Value” field.
- Click the “Add secret” button.
Now, your OpenAI API key is stored securely in your repository. You can access it in your workflow using the `secrets` context, like we did in our YAML file: `${{ secrets.OPENAI_API_KEY }}`. This is a critical step for keeping your API key safe.