How to Receive GitHub Webhooks on Localhost without a Proxy
Building local CI/CD scripts or GitHub bots? Learn how to use FetchHook to receive events from GitHub on your local machine securely and reliably.
GitHub Webhook Consumer (Python)
import requests

def listen_to_github():
    # Pull the latest GitHub events from the FetchHook mailbox
    res = requests.get(
        "https://api.fetchhook.app/api/v1/stash_github_dev",
        headers={"Authorization": "Bearer fh_live_xxx"},
    )
    for event in res.json().get('events', []):
        payload = event['payload']
        repo = payload.get('repository', {}).get('name')
        action = payload.get('action')
        print(f"GitHub Event: {action} on {repo}")
#How do I connect GitHub to my local script?
Setup takes 3 minutes: (1) Go to your GitHub Repository Settings → Webhooks → Add webhook, (2) Payload URL: Enter your FetchHook ingress URL (https://api.fetchhook.app/in/stash_github_dev), (3) Content type: Select 'application/json', (4) Secret: Optional but recommended for security, (5) Events: Select which events to receive (Push, Pull requests, Issues, etc.), (6) Click 'Add webhook'. GitHub will immediately send a 'ping' event to verify the URL works. FetchHook responds instantly, and GitHub marks the webhook as active.
GitHub Webhook Configuration
GitHub Repo → Settings → Webhooks → Add webhook
Payload URL:
https://api.fetchhook.app/in/stash_github_dev
Content type:
application/json
Secret: (optional, for signature verification)
your_webhook_secret_here
Which events would you like to trigger this webhook?
○ Just the push event
○ Send me everything
● Let me select individual events:
✓ Pull requests
✓ Issues
✓ Issue comments
✓ Pull request reviews
✓ Workflow runs
Active: ✓
After adding, GitHub sends a "ping" event to test.
FetchHook responds instantly → Webhook is active!
#How does FetchHook handle GitHub's strict timeouts?
GitHub has a strict 10-second timeout for webhook delivery. If your local script is slow to process an event (LLM call, API request, database query), GitHub will mark it as failed and retry. After several failures, GitHub disables your webhook. FetchHook solves this by accepting the event in <100ms and returning HTTP 202 immediately, satisfying GitHub's timeout. Your script can then take all the time it needs to process the payload locally—30 seconds, 5 minutes, doesn't matter. No retries, no failures, no disabled webhooks.
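The decoupling can be sketched in a few lines. This is a minimal illustration, not an official FetchHook client: `slow_pr_analysis` is a hypothetical stand-in for your long-running work, and the URL and response shape follow the other examples in this article:

```python
import time
import requests

def slow_pr_analysis(payload, delay=15):
    # Stand-in for an LLM call or test run that takes well over 10 seconds
    time.sleep(delay)
    title = payload.get("pull_request", {}).get("title")
    return f"analyzed {title}"

def drain_mailbox(api_key, source_id="stash_github_dev"):
    # GitHub already received its HTTP 202 from FetchHook at delivery time,
    # so nothing in this loop is racing the 10-second deadline.
    res = requests.get(
        f"https://api.fetchhook.app/api/v1/{source_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    for event in res.json().get("events", []):
        print(slow_pr_analysis(event["payload"]))
```

Because the pull happens on your schedule, a 15-second handler is no different from a 15-millisecond one as far as GitHub is concerned.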
The Timeout Solution
Traditional (ngrok to localhost):
GitHub sends PR webhook → ngrok (200ms) → localhost (processing 15s)
→ GitHub timeout @ 10s → Marks as failed → Retry
→ Retry times out again → GitHub disables webhook
FetchHook:
GitHub sends PR webhook → FetchHook (50ms) → HTTP 202 success
→ GitHub happy, marks as delivered
→ Your script pulls when ready
→ Process PR (15s) → No timeout pressure
#Which GitHub events should I listen for?
Most GitHub automation uses 5-10 core events: (1) push - Code pushed to repo, trigger CI/CD, (2) pull_request (opened, synchronize, closed) - PR lifecycle events, (3) issues (opened, closed, labeled) - Issue tracking automation, (4) issue_comment - Comments on issues or PRs, (5) pull_request_review - PR review submitted, (6) release (published) - New release created, (7) workflow_run (completed) - GitHub Actions workflow finished. For GitHub Apps, add installation and installation_repositories events.
Event Handler Pattern (Python)
import requests
import os

FETCHHOOK_API_KEY = os.getenv("FETCHHOOK_API_KEY")

# Event handlers
def handle_pull_request(payload):
    action = payload['action']
    pr = payload['pull_request']
    if action == 'opened':
        print(f"New PR opened: {pr['title']}")
        # Run automated code review
        review_pr_with_llm(pr)
    elif action == 'closed' and pr.get('merged'):
        print(f"PR merged: {pr['title']}")
        # Trigger deployment
        deploy_to_staging()

def handle_issues(payload):
    action = payload['action']
    issue = payload['issue']
    if action == 'opened':
        print(f"New issue: {issue['title']}")
        # Auto-triage with labels
        auto_label_issue(issue)

def handle_issue_comment(payload):
    comment = payload['comment']
    issue = payload['issue']
    print(f"Comment on issue #{issue['number']}: {comment['body'][:50]}")
    # Check for bot commands
    if '@bot' in comment['body']:
        process_bot_command(comment, issue)

def handle_push(payload):
    print(f"Push to {payload.get('ref')} by {payload.get('pusher', {}).get('name')}")
    # Trigger local CI/CD here

# Router
def process_github_event(event):
    event_type = event.get('metadata', {}).get('github_event')
    payload = event['payload']
    handlers = {
        'pull_request': handle_pull_request,
        'issues': handle_issues,
        'issue_comment': handle_issue_comment,
        'push': handle_push,
    }
    handler = handlers.get(event_type)
    if handler:
        handler(payload)
    else:
        print(f"Unhandled event: {event_type}")

# Main loop
def listen_to_github():
    response = requests.get(
        "https://api.fetchhook.app/api/v1/stash_github_dev",
        headers={"Authorization": f"Bearer {FETCHHOOK_API_KEY}"},
    )
    events = response.json().get('events', [])
    for event in events:
        # Only process events whose GitHub signature was verified
        if event.get('signature_verified'):
            process_github_event(event)

listen_to_github()
#How do I verify GitHub webhook signatures?
GitHub signs webhooks with HMAC-SHA256 using your webhook secret (the 'Secret' field when creating the webhook). The signature is sent in the X-Hub-Signature-256 header. FetchHook automatically verifies this signature at ingress if you've configured your webhook secret. Just check event['signature_verified'] is true before processing. This prevents attackers from sending fake GitHub events to trigger malicious actions in your automation.
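If you ever need to re-verify a signature yourself (defense in depth, or testing with a captured raw body), GitHub's scheme is easy to reproduce. A minimal sketch — the helper name is ours, and it assumes you have the raw request body bytes and the X-Hub-Signature-256 header value available:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    # GitHub sends 'sha256=<hex digest>' in X-Hub-Signature-256,
    # computed as HMAC-SHA256 over the raw request body.
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks
    return hmac.compare_digest(expected, signature_header)

# Example: a payload signed with the secret verifies; a tampered body does not
secret = "your_webhook_secret_here"
body = b'{"action": "opened"}'
good_sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, body, good_sig))               # True
print(verify_github_signature(secret, b'{"action": "x"}', good_sig)) # False
```

Always compute the HMAC over the raw bytes as delivered — re-serializing the parsed JSON can change whitespace or key order and break verification.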
#What are common use cases for local GitHub webhooks?
Local GitHub webhook automation is perfect for: (1) Automated code review bots - PR opened triggers LLM code analysis, (2) Custom CI/CD pipelines - Push triggers local build and test scripts, (3) Issue triage automation - New issue triggers auto-labeling and assignment, (4) Release automation - Release published triggers changelog generation and deployment, (5) GitHub bot development - Test bot logic locally before deploying, (6) Notification routing - GitHub events trigger Slack/Discord/email notifications. All without exposing your local dev environment to the internet.
Automated Code Review Bot (Example)
import os
import openai
import requests

def review_pr_with_llm(pr):
    """
    Automated code review bot triggered by PR webhook.
    Uses an LLM to analyze code changes.
    """
    # Fetch the PR diff
    diff_url = pr['diff_url']
    diff = requests.get(diff_url).text
    # Analyze with LLM (takes 10-30 seconds - no timeout with FetchHook)
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Review this code change and suggest improvements:\n\n{diff}"
        }]
    )
    review_comment = response.choices[0].message.content
    # Post the review as a comment on the PR via the GitHub API
    comments_url = pr['comments_url']
    requests.post(comments_url, json={
        "body": f"🤖 Automated Code Review:\n\n{review_comment}"
    }, headers={
        "Authorization": f"Bearer {os.getenv('GITHUB_TOKEN')}"
    })
    print(f"Posted review on PR #{pr['number']}")

# Webhook trigger
def handle_pull_request(payload):
    if payload['action'] == 'opened':
        review_pr_with_llm(payload['pull_request'])
#How do I test GitHub webhooks locally?
Testing is easy: (1) Create test events by performing actions in GitHub (open an issue, create a PR, push code), (2) Use GitHub's 'Redeliver' button in Settings → Webhooks → Recent Deliveries to replay events, (3) Use GitHub API to create test events programmatically. All webhooks go to FetchHook. Your local script pulls and processes them. You can restart your dev environment as many times as needed—events wait in the mailbox. Perfect for debugging webhook handlers without polluting production.
Local Testing Workflow
# Terminal 1: Run your GitHub automation script
python github_bot.py
# Terminal 2: Trigger test events
# Option 1: Manual actions in GitHub UI
# - Open a test issue
# - Create a test PR
# - Add a comment
# Option 2: Use GitHub API to create events
curl -X POST https://api.github.com/repos/user/repo/issues \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  -d '{"title":"Test issue for webhook testing"}'
# Option 3: Redeliver previous webhook
# GitHub Settings → Webhooks → Recent Deliveries → Redeliver
# Your script pulls and processes automatically
#How do I deploy GitHub webhooks to production?
For production: (1) Create a separate FetchHook stash (stash_github_prod), (2) Update GitHub webhook config to point to prod stash URL, (3) Deploy your script as a serverless function (AWS Lambda, Cloud Run) triggered by scheduler every 1-5 minutes, or as a long-running background worker. Use separate GitHub tokens and API keys for prod. For high-traffic repos, consider running multiple workers pulling from the same mailbox for parallel processing.
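For the long-running worker option, a minimal sketch might look like this. `run_worker` and `verified_events` are our names, and the URL and response shape follow the other examples in this article; running several copies of this process gives you the parallel-worker setup described above:

```python
import os
import time
import requests

def verified_events(response_json):
    # Keep only events whose GitHub signature FetchHook verified at ingress
    return [e for e in response_json.get("events", [])
            if e.get("signature_verified")]

def run_worker(process_event, poll_interval=30):
    """Long-running worker: poll the mailbox, process, sleep, repeat."""
    api_key = os.getenv("FETCHHOOK_API_KEY")
    source_id = os.getenv("SOURCE_ID", "stash_github_prod")
    while True:
        try:
            res = requests.get(
                f"https://api.fetchhook.app/api/v1/{source_id}",
                headers={"Authorization": f"Bearer {api_key}"},
                timeout=10,
            )
            res.raise_for_status()
            for event in verified_events(res.json()):
                process_event(event)
        except requests.RequestException as exc:
            # Transient network errors shouldn't kill the worker
            print(f"Pull failed, will retry: {exc}")
        time.sleep(poll_interval)
```

Pass in your event router (for example, the `process_github_event` function from the handler pattern above) as `process_event`.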
Production Serverless Deployment
# AWS Lambda handler (triggered by EventBridge every 2 minutes)
import requests
import os

def lambda_handler(event, context):
    # Pull GitHub events from FetchHook
    response = requests.get(
        f"https://api.fetchhook.app/api/v1/{os.getenv('SOURCE_ID')}",
        headers={"Authorization": f"Bearer {os.getenv('FETCHHOOK_API_KEY')}"}
    )
    events = response.json().get('events', [])
    print(f"Processing {len(events)} GitHub events")
    for event in events:
        if event.get('signature_verified'):
            process_github_event(event)
    return {
        'statusCode': 200,
        'body': f'Processed {len(events)} events'
    }

# Environment variables:
# SOURCE_ID=stash_github_prod
# FETCHHOOK_API_KEY=fh_prod_xxx
# GITHUB_TOKEN=ghp_xxx (for GitHub API calls)
#How do I monitor GitHub webhook health?
Monitor from both GitHub and your processor: (1) GitHub Settings → Webhooks shows the delivery success rate and recent deliveries, (2) your logs should track events pulled per run, events processed successfully, signature verification failures, and processing errors. Set up alerts for: GitHub delivery failures (check your webhook config), no events pulled in 24 hours (the webhook might be disabled, or there is simply no repo activity), and a high processing error rate (a bug in handler code). GitHub also emails you if a webhook is disabled due to repeated failures.
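On the processor side, the numbers worth alerting on are easy to tally in the pull loop. A minimal sketch — the metric names and event shapes are illustrative, following the examples in this article:

```python
def run_metrics(events, process):
    """Process a batch of pulled events and tally the health numbers."""
    metrics = {"pulled": len(events), "processed": 0,
               "signature_failures": 0, "errors": 0}
    for event in events:
        if not event.get("signature_verified"):
            metrics["signature_failures"] += 1
            continue
        try:
            process(event)
            metrics["processed"] += 1
        except Exception:
            metrics["errors"] += 1
    return metrics

# Example batch: one good event, one unverified, one that blows up in the handler
events = [
    {"signature_verified": True, "payload": {"action": "opened"}},
    {"signature_verified": False, "payload": {}},
    {"signature_verified": True, "payload": None},
]
print(run_metrics(events, lambda e: e["payload"]["action"]))
# {'pulled': 3, 'processed': 1, 'signature_failures': 1, 'errors': 1}
```

Ship these counters to whatever monitoring you already use (CloudWatch, Prometheus, or plain logs) and alert on sustained nonzero `signature_failures` or `errors`.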