
Real-Time NOC Alerting: Sync PagerDuty and Jira to Google Chat via Apps Script

By Vo Tu Duc
March 29, 2026

Constantly pivoting between disjointed tools during a critical incident doesn’t just waste valuable engineering time; it actively destroys SLAs, customer trust, and team morale. Learn how fragmented communication is secretly sabotaging your Network Operations Center and why unifying your response workflow is essential when every millisecond counts.


The Cost of Fragmented Incident Communication

In the high-stakes environment of a modern Network Operations Center (NOC), information is your most valuable currency. However, when that critical information is scattered across disparate platforms, it quickly transforms from an operational asset into a severe liability. Today’s cloud-native architectures generate a relentless stream of telemetry, logs, and alerts. When your incident response workflow requires engineers to constantly pivot between PagerDuty for on-call paging, Jira for issue tracking, and Google Chat for team collaboration, the resulting fragmentation creates a dangerous lag in incident resolution. The true cost of this fragmentation isn’t just measured in wasted engineering hours—it is measured in breached Service Level Agreements (SLAs), degraded customer trust, and accelerating team burnout.

Why Milliseconds Matter in Network Operations

In cloud engineering, downtime is an expensive luxury no organization can afford. Whether you are routing traffic through global load balancers in Google Cloud or orchestrating a massive fleet of microservices, network anomalies can cascade into catastrophic outages in the blink of an eye. In a NOC, milliseconds matter. Metrics like Mean Time to Acknowledge (MTTA) and Mean Time to Resolution (MTTR) are not just vanity dashboards; they are the foundational lifelines of your system’s reliability.

When a critical severity (SEV-1) incident strikes, the cognitive load on an on-call engineer spikes instantly. Every second spent hunting for the newly generated Jira ticket ID, or cross-referencing a PagerDuty incident payload in a separate browser tab, is a second stolen from actual triage and mitigation. Rapid incident response relies on immediate, shared situational awareness. If your team is delayed by even a few minutes because an alert was buried in an email inbox rather than pushed directly to their active workspace, the blast radius of the outage expands exponentially. In the modern cloud landscape, speed is the ultimate mitigant, and speed requires seamless data flow.

The Pitfalls of Siloed Alerting Systems

Traditionally, alerting, ticketing, and communication have existed in isolated silos. PagerDuty excels at waking up the right person, and Jira remains the gold standard for tracking the lifecycle of an outage. However, when these systems operate independently of your team’s primary communication hub—Google Chat—you force your engineers into a “swivel-chair” operational model.


This siloed approach introduces several critical pitfalls that can cripple a NOC’s effectiveness:

  • The Context-Switching Tax: Engineers are forced to jump frantically between applications. This constant context-switching breaks focus, increases cognitive fatigue, and drastically raises the likelihood of human error during high-pressure mitigation efforts.

  • Information Asymmetry: If an engineer acknowledges a PagerDuty alert on their phone but doesn’t manually update the team in Google Chat or transition the Jira ticket state, the rest of the NOC is left in the dark. This lack of transparency often leads to duplicated efforts, where multiple engineers unknowingly troubleshoot the exact same issue.

  • Alert Fatigue and “White Noise”: When alerts lack actionable, cross-platform context, or are delivered through passive channels, they quickly become white noise. Without a centralized feed, critical anomalies can easily slip through the cracks.

  • Complicated Post-Mortems: Without a unified ChatOps workflow, post-incident reviews become a forensic nightmare. Incident commanders are forced to manually piece together timestamps and actions from PagerDuty logs, Jira histories, and disjointed chat threads to figure out what actually happened.

To build a resilient, highly responsive NOC, these silos must be dismantled. The data must go to where the engineers already live and collaborate.

Architecting a Real-Time Alert Middleware

In a high-velocity Network Operations Center (NOC), relying on native, out-of-the-box integrations often leads to rigid workflows and alert fatigue. By leveraging Google Apps Script as a serverless middleware, we can build a highly customizable bridge between your incident management tools (PagerDuty and Jira) and your communication hub (Google Chat). This architecture requires zero dedicated infrastructure, scales effortlessly, and executes within the secure boundary of your Google Workspace environment.

The goal of this middleware is simple but critical: ingest incoming HTTP payloads, evaluate their operational importance, and route them to the appropriate responders in near real-time.

Core Logic: The Webhook Listener in Apps Script

At the heart of this architecture is the Apps Script Web App functionality. By defining a doPost(e) function, Apps Script can act as a public-facing webhook endpoint capable of receiving POST requests from external services like PagerDuty and Jira.

When an incident is triggered, updated, or resolved, the source system fires a JSON payload to your Apps Script URL. The core logic must first intercept this event, parse the raw text into a manageable JavaScript object, and normalize the data. Because PagerDuty and Jira structure their JSON payloads differently, your listener needs a routing mechanism right at the entry point to determine the origin of the webhook.

Here is a conceptual look at how the listener handles the incoming stream:


function doPost(e) {
  try {
    // Parse the incoming JSON payload
    const payload = JSON.parse(e.postData.contents);

    // Identify the source (e.g., Jira uses 'webhookEvent', PagerDuty uses 'messages')
    if (payload.webhookEvent) {
      handleJiraPayload(payload);
    } else if (payload.messages && payload.messages[0].event) {
      handlePagerDutyPayload(payload);
    } else {
      console.warn("Unrecognized payload structure.");
    }

    // Return a 200 OK to acknowledge receipt and prevent retries
    return ContentService.createTextOutput("Success").setMimeType(ContentService.MimeType.TEXT);
  } catch (error) {
    console.error("Error processing webhook: " + error.toString());
    return ContentService.createTextOutput("Error").setMimeType(ContentService.MimeType.TEXT);
  }
}

This lightweight listener ensures that the external systems receive a 200 OK response quickly, preventing webhook timeouts and unnecessary retry storms. Note that Apps Script web apps are synchronous: the HTTP response is only sent once doPost returns, so keep the handler fast and defer any heavy processing (for example, to a time-driven trigger).

Filtering Alert Severity for Precision Routing

Alert fatigue is the silent killer of NOC efficiency. If every minor Jira ticket update or low-priority PagerDuty informational event triggers a ping in Google Chat, engineers will quickly learn to ignore the channel. To prevent this, our middleware must act as a ruthless filter.

Once the payload is parsed, the script inspects the severity, priority, or event type. For PagerDuty, you might only care about incident.trigger events with a severity of error or critical. For Jira, you might filter for issues created in the NOC project with a Priority of Highest or High.

By implementing strict conditional logic, we ensure that only actionable, high-impact alerts make it through the middleware:


function isAlertActionable(incidentData, source) {
  if (source === 'PagerDuty') {
    const severity = incidentData.log_entries[0].channel.urgency; // e.g., 'high' or 'low'
    const status = incidentData.status; // e.g., 'triggered', 'acknowledged', 'resolved'
    // Only forward high-urgency triggers or resolutions
    return (severity === 'high' && (status === 'triggered' || status === 'resolved'));
  }
  if (source === 'Jira') {
    const priority = incidentData.issue.fields.priority.name;
    // Drop anything below High priority
    return (priority === 'Highest' || priority === 'High');
  }
  return false;
}

This precision filtering guarantees that when a Google Chat notification chimes, the on-call engineers know it requires their immediate attention.
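Because isAlertActionable is a pure function, you can sanity-check it with mock payloads in any JavaScript runtime before wiring it into the webhook flow. The payload objects below are trimmed-down assumptions containing only the fields the filter reads, not complete PagerDuty or Jira payloads:

```javascript
// Minimal restatement of the filter so it can run outside Apps Script.
function isAlertActionable(incidentData, source) {
  if (source === 'PagerDuty') {
    const severity = incidentData.log_entries[0].channel.urgency;
    const status = incidentData.status;
    return (severity === 'high' && (status === 'triggered' || status === 'resolved'));
  }
  if (source === 'Jira') {
    const priority = incidentData.issue.fields.priority.name;
    return (priority === 'Highest' || priority === 'High');
  }
  return false;
}

// Mock payloads, trimmed to only the fields the filter inspects
const pdCritical = { status: 'triggered', log_entries: [{ channel: { urgency: 'high' } }] };
const pdAck = { status: 'acknowledged', log_entries: [{ channel: { urgency: 'high' } }] };
const jiraLow = { issue: { fields: { priority: { name: 'Medium' } } } };

console.log(isAlertActionable(pdCritical, 'PagerDuty')); // true  -> page the team
console.log(isAlertActionable(pdAck, 'PagerDuty'));      // false -> suppress acknowledgements
console.log(isAlertActionable(jiraLow, 'Jira'));         // false -> below the priority bar
```

Running a handful of mocks like this catches structural mistakes (a misspelled field path throws immediately) long before a real SEV-1 exercises the code path.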

Mapping Alerts to Specific Google Chat Spaces

A centralized “firehose” channel for all alerts is rarely effective in a mature engineering organization. Database alerts should go to the DBA team, security anomalies to SecOps, and critical application outages to the main incident command space.

To achieve this, the middleware utilizes a routing table—a simple mapping of alert attributes to specific Google Chat Webhook URLs. By inspecting the payload for specific routing keys (such as the PagerDuty service.summary or the Jira project.key), the Apps Script dynamically determines the correct destination.


const CHAT_SPACES = {
  "DB_ALERTS": "https://chat.googleapis.com/v1/spaces/XXXX/messages?key=YYYY&token=ZZZZ",
  "SEC_OPS": "https://chat.googleapis.com/v1/spaces/AAAA/messages?key=BBBB&token=CCCC",
  "NOC_MAIN": "https://chat.googleapis.com/v1/spaces/DDDD/messages?key=EEEE&token=FFFF"
};

function routeAlert(incident) {
  let webhookUrl = CHAT_SPACES["NOC_MAIN"]; // Default fallback

  // Route based on PagerDuty service name
  if (incident.service_name.includes("Database") || incident.service_name.includes("PostgreSQL")) {
    webhookUrl = CHAT_SPACES["DB_ALERTS"];
  }
  // Route based on Jira project key
  else if (incident.project_key === "SEC") {
    webhookUrl = CHAT_SPACES["SEC_OPS"];
  }

  // Construct and send the Google Chat card message
  sendToGoogleChat(webhookUrl, incident);
}

This mapping strategy transforms Google Chat from a noisy catch-all into a highly organized, context-aware incident response platform. By decoupling the source systems from the destination spaces, you can easily add new teams, update chat spaces, or change routing rules entirely within the Apps Script environment, without ever touching the configuration in PagerDuty or Jira.
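One way to keep those routing rules easy to evolve is to split the space lookup into a pure function that returns the space key. This is a sketch using the same hypothetical normalized fields (service_name, project_key) as the routing code above:

```javascript
// Sketch: resolve the target Chat space key from normalized incident fields.
// 'service_name' and 'project_key' are hypothetical fields produced by the
// normalization step; adjust to whatever your middleware actually emits.
function resolveSpaceKey(incident) {
  if (incident.service_name &&
      (incident.service_name.includes("Database") || incident.service_name.includes("PostgreSQL"))) {
    return "DB_ALERTS";
  }
  if (incident.project_key === "SEC") {
    return "SEC_OPS";
  }
  return "NOC_MAIN"; // Default fallback space
}

console.log(resolveSpaceKey({ service_name: "PostgreSQL Primary" })); // "DB_ALERTS"
console.log(resolveSpaceKey({ project_key: "SEC" }));                 // "SEC_OPS"
console.log(resolveSpaceKey({ service_name: "Frontend CDN" }));       // "NOC_MAIN"
```

routeAlert then reduces to sendToGoogleChat(CHAT_SPACES[resolveSpaceKey(incident)], incident), and onboarding a new team means adding one entry to each map plus one branch in the resolver, all of which can be unit-tested without sending a single message.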

Implementing the Tech Stack

To build a resilient, real-time NOC alerting pipeline, we need to establish a seamless flow of data from our incident management systems into our communication hub. This architecture relies on three core pillars: event-driven webhooks from PagerDuty and Jira, a serverless middleware built on Google Apps Script to normalize the data, and the Google Chat API to deliver rich, actionable alerts.

Let’s dive into the technical implementation of each component.

Configuring PagerDuty and Jira Webhooks

The first step is configuring our source systems to push event payloads out to our middleware whenever a critical action occurs. Rather than polling for changes—which consumes unnecessary API quota and introduces latency—we will use webhooks to achieve true real-time synchronization.

PagerDuty Webhooks (v3)

In PagerDuty, navigate to Integrations > Generic Webhooks (v3). You will want to create a new webhook subscription tied to your specific NOC services or an entire team.

  1. Set the Webhook URL to the endpoint we will generate in the Apps Script step.

  2. Select the specific events you want to track. For a NOC environment, you typically want incident.triggered, incident.acknowledged, and incident.resolved.

  3. PagerDuty will send a robust JSON payload containing the incident ID, urgency, assignee, and HTML-formatted descriptions.

Jira Webhooks

For Jira, head over to Jira Settings > System > Webhooks.

  1. Create a new webhook and point it to the same Apps Script URL.

  2. Use JQL (Jira Query Language) to filter the noise. You don’t want every minor ticket update pinging the NOC. A good JQL filter might look like: project = NOC AND priority in (Highest, High).

  3. Under the Events section, check the boxes for Issue created, updated, and deleted.

Note: Both systems allow you to set a secret token for payload verification. While we will keep the code examples below streamlined, a production-grade Cloud Engineering setup should always validate the HMAC signature of incoming webhooks to prevent spoofing.

Building the Apps Script Middleware

Google Apps Script acts as our serverless glue. It’s a highly scalable, zero-maintenance environment perfect for catching webhooks, transforming JSON payloads, and routing them to Google Workspace services.

To handle incoming webhooks, we must utilize the reserved doPost(e) function. This function will parse the incoming POST request, determine whether the source is PagerDuty or Jira, extract the relevant data, and normalize it into a standard alert object.

Here is the foundational code for your Apps Script middleware:


function doPost(e) {
  try {
    // Parse the incoming webhook payload
    const payload = JSON.parse(e.postData.contents);
    let alertData = {};

    // Routing logic: identify the source of the webhook
    if (payload.messages && payload.messages[0].event.startsWith('incident.')) {
      // Handle PagerDuty payload
      const incident = payload.messages[0].incident;
      alertData = {
        source: 'PagerDuty',
        title: incident.title,
        status: incident.status.toUpperCase(),
        url: incident.html_url,
        description: incident.summary,
        color: incident.status === 'resolved' ? '#34A853' : '#EA4335' // Green for resolved, red for active
      };
    } else if (payload.webhookEvent && payload.webhookEvent.startsWith('jira:issue_')) {
      // Handle Jira payload
      const issue = payload.issue;
      alertData = {
        source: 'Jira',
        title: `${issue.key}: ${issue.fields.summary}`,
        status: issue.fields.status.name.toUpperCase(),
        url: `https://your-domain.atlassian.net/browse/${issue.key}`,
        description: issue.fields.description || 'No description provided.',
        color: '#4285F4' // Google blue for Jira updates
      };
    } else {
      // Ignore unhandled events (a normal return is served as 200 OK)
      return ContentService.createTextOutput("Event ignored");
    }

    // Forward the normalized data to Google Chat
    sendToGoogleChat(alertData);

    // Acknowledge receipt to the source system
    return ContentService.createTextOutput("Webhook processed successfully");
  } catch (error) {
    // Note: ContentService cannot set custom HTTP status codes from doPost,
    // so log the failure and return a plain text body instead.
    console.error("Middleware Error: " + error);
    return ContentService.createTextOutput("Internal Server Error");
  }
}

Pro-tip: Once you write this code, deploy it by clicking Deploy > New Deployment, select Web App, and set access to “Anyone”. The resulting URL is the endpoint you will paste into your PagerDuty and Jira webhook configurations.

Integrating the Google Chat API

With our data normalized, the final step is formatting and delivering it to the NOC team’s Google Chat space. While you could send plain text, leveraging Google Chat’s Card V2 API provides a significantly better user experience. Cards allow us to use structured layouts, custom colors, and interactive buttons (like a direct link to acknowledge the incident).

First, generate an Incoming Webhook URL in your target Google Chat space (Space Settings > Apps & integrations > Webhooks). Store this URL securely with the Apps Script Properties Service (Project Settings > Script Properties in the editor) under the key CHAT_WEBHOOK_URL.

Next, implement the sendToGoogleChat function in your Apps Script project to construct the Card V2 payload and push it via UrlFetchApp:


function sendToGoogleChat(alertData) {
  // Retrieve the secure webhook URL from Script Properties
  const chatWebhookUrl = PropertiesService.getScriptProperties().getProperty('CHAT_WEBHOOK_URL');
  if (!chatWebhookUrl) {
    console.error("Chat Webhook URL is missing.");
    return;
  }

  // Construct a Google Chat Card V2 payload
  const messagePayload = {
    "cardsV2": [
      {
        "cardId": "noc-alert-card",
        "card": {
          "header": {
            "title": alertData.title,
            "subtitle": `Source: ${alertData.source} | Status: ${alertData.status}`,
            "imageType": "CIRCLE",
            "imageUrl": alertData.source === 'PagerDuty'
              ? "https://www.pagerduty.com/wp-content/uploads/2022/03/PagerDuty-Icon-Green.png"
              : "https://cdn.icon-icons.com/icons2/2699/PNG/512/atlassian_jira_logo_icon_170511.png"
          },
          "sections": [
            {
              "widgets": [
                {
                  "textParagraph": {
                    "text": `<b>Details:</b><br>${alertData.description.substring(0, 250)}...`
                  }
                },
                {
                  "buttonList": {
                    "buttons": [
                      {
                        "text": "View in System",
                        "onClick": {
                          "openLink": {
                            "url": alertData.url
                          }
                        }
                      }
                    ]
                  }
                }
              ]
            }
          ]
        }
      }
    ]
  };

  const options = {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(messagePayload),
    muteHttpExceptions: true
  };

  // Execute the API call to Google Chat
  const response = UrlFetchApp.fetch(chatWebhookUrl, options);
  console.log(`Google Chat API Response: ${response.getResponseCode()}`);
}

By utilizing the cardsV2 schema, the Google Chat API transforms the raw JSON from PagerDuty and Jira into a highly visible, interactive alert. The NOC team instantly sees the source, status, and summary, complete with a call-to-action button that drops them directly into the specific ticket or incident to begin remediation.

Deploying and Testing the Alert Pipeline

With the integration logic written and our webhook handlers configured to parse incoming JSON payloads, it is time to deploy our Google Apps Script as a live Web App. To do this, navigate to the Deploy button in the Apps Script editor, select New deployment, and choose Web app. Crucially, ensure that the “Who has access” setting is configured to Anyone; this allows unauthenticated POST requests from PagerDuty and Jira to successfully reach your endpoint.

Once deployed, you will receive a unique Web App URL. You will paste this URL into the webhook configuration settings of both your PagerDuty service and your Jira project. However, deploying the code is only half the battle. In a Network Operations Center (NOC) environment, alerting pipelines are mission-critical. You must rigorously validate that the pipeline can handle the heat of a real outage.

Simulating High Severity Incidents

You should never wait for an actual production outage to verify that your alerting pipeline works. Simulating high-severity incidents ensures that your Google Chat spaces receive the right information, with the correct formatting, exactly when it matters most.

To thoroughly test the pipeline, you should simulate payloads from both sources:

  1. Mocking PagerDuty Webhooks: While you can trigger a test incident directly from the PagerDuty UI in a sandbox service, a more controlled method for developers is to use a tool like Postman or cURL. By sending a mock incident.trigger payload directly to your Apps Script Web App URL, you can rapidly iterate on your Google Chat Card formatting.

curl -X POST -H "Content-Type: application/json" \
  -d '{"messages":[{"event":"incident.trigger","incident":{"id":"P123456","title":"CRITICAL: Database Cluster Down","urgency":"high"}}]}' \
  "YOUR_APPS_SCRIPT_WEB_APP_URL"

  2. Triggering Jira Transitions: In Jira, set up a test project with the same workflow and webhook configuration as your production environment. Create a dummy ticket and escalate its priority to “Highest” or transition it to a “Blocker” status.

  3. Validating the Output: Switch over to your designated Google Chat NOC space. Verify that the incident card renders correctly. Are the interactive buttons linking back to the specific PagerDuty incident and Jira issue functional? Is the visual hierarchy clear (e.g., red headers for P1 alerts)? Ensure that the payload parsing didn’t drop any critical context, such as the assignee or the affected service.

Monitoring Apps Script Quotas and Execution Times

Google Apps Script provides a fantastic, serverless glue for Google Workspace and Cloud integrations, but it is not without its limits. As a Cloud Engineer, you must architect for scale and monitor your resource consumption to prevent silent failures during an alert storm.

First, be aware of Apps Script Quotas. For Google Workspace Enterprise accounts, the UrlFetchApp service (which we use to push messages to the Google Chat API) has a generous limit of 100,000 calls per day. However, consumer or lower-tier accounts are capped at 20,000. If your NOC experiences a massive cascade of alerts, you could theoretically hit these limits, causing subsequent alerts to drop.

Execution time is another critical metric. Apps Script has a hard execution limit of 6 minutes per run, but in the context of webhooks, speed is much more constrained by the sender. PagerDuty and Jira expect a 200 OK response almost immediately. If your script takes too long to process the payload and post to Google Chat, the source systems will register a timeout and may attempt to retry the webhook, leading to duplicate alerts in your chat space.
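One way to defuse retry-induced duplicates is to deduplicate on a stable event identifier before posting to Chat. The sketch below uses an in-memory Map with a TTL so it can run anywhere; in an actual Apps Script deployment you would back it with CacheService.getScriptCache() (put/get with an expiration), since each web-app execution is stateless. The key format is an assumption:

```javascript
// Sketch: drop duplicate webhook deliveries seen within a TTL window.
// In Apps Script, replace 'seen' with CacheService.getScriptCache(), which
// persists entries briefly across otherwise-stateless executions.
const seen = new Map(); // eventKey -> expiry timestamp (ms)

function isDuplicate(eventKey, ttlMs = 5 * 60 * 1000, now = Date.now()) {
  const expiry = seen.get(eventKey);
  if (expiry !== undefined && expiry > now) {
    return true; // Already handled recently: skip re-posting to Chat
  }
  seen.set(eventKey, now + ttlMs);
  return false;
}

// Hypothetical key: source, incident ID, and event type
console.log(isDuplicate('PD:P123456:triggered')); // false -> first delivery, process it
console.log(isDuplicate('PD:P123456:triggered')); // true  -> sender retry, drop it
```

Checking this guard at the top of doPost means a slow execution that triggers a PagerDuty or Jira retry produces one Chat card instead of two.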

To effectively monitor this pipeline:

  • Use the Executions Dashboard: Regularly check the Executions tab in the Apps Script editor. Look for Failed or Timed Out statuses. Pay attention to the “Duration” column; your webhook executions should ideally complete in under 1-2 seconds.

  • Link a Standard GCP Project: By default, Apps Script uses a hidden default Google Cloud project. For enterprise-grade monitoring, associate your script with a standard Google Cloud Project. This allows you to route all console.log() and console.error() outputs directly into Google Cloud Logging (formerly Stackdriver).

  • Set up Meta-Alerting: Once your logs are flowing into Cloud Logging, you can create Log-Based Metrics. Set up an alert policy in Google Cloud Monitoring to notify your team (perhaps via a secondary channel or direct email) if the Apps Script begins throwing HTTP 500 errors or exceeding expected execution times. Monitoring the monitor is a cornerstone of reliable NOC engineering.

Scale Your NOC Architecture

Once you have the baseline PagerDuty and Jira to Google Chat integration running smoothly via Apps Script, you will quickly realize the transformative power of real-time ChatOps. However, as your organization grows, so does your operational complexity. An Apps Script webhook receiver that perfectly handles ten alerts a day might hit execution limits or quota restrictions when faced with a massive incident storm generating thousands of payloads per minute.

Scaling your NOC (Network Operations Center) architecture means transitioning from a functional integration to an enterprise-grade, highly available alerting ecosystem. As your infrastructure expands, you must start thinking like a Cloud Engineer: decoupling services, implementing message dead-letter queues (such as Google Cloud Pub/Sub), and potentially migrating heavy webhook processing to Google Cloud Functions or Cloud Run. This ensures zero dropped alerts, guaranteed delivery, and seamless synchronization between your incident management tools and your communication hubs during critical outages.

Audit Your Specific Business Needs

Scaling isn’t a one-size-fits-all endeavor; it requires a strategic evaluation of your current operations. Before provisioning new infrastructure or refactoring your Apps Script logic into microservices, you must comprehensively audit your specific business needs.

Start by analyzing your alert volume and frequency. Are there distinct peak hours, or is the load highly unpredictable? Evaluate your Mean Time To Acknowledge (MTTA) and Mean Time To Resolution (MTTR) metrics to identify where the bottlenecks exist in your current workflow.

You also need to evaluate the following critical areas:

  • Security and Compliance: Do your Jira tickets or PagerDuty payloads contain PII or sensitive infrastructure data? If so, you may need to implement VPC Service Controls, strict IAM policies, or Secret Manager when routing data through Google Cloud.

  • Team Topology: A globally distributed NOC team might require advanced routing logic, multi-space Google Chat broadcasting based on severity, and automated shift-handoff summaries.

  • Actionability: Are your alerts actionable? An audit should determine if you need to add interactive Google Chat Cards with buttons that allow engineers to acknowledge a PagerDuty incident or transition a Jira ticket directly from the chat interface.

By clearly defining these parameters, you ensure that your scaled architecture solves real operational pain points rather than simply adding unnecessary technical debt.

Book a GDE Discovery Call with Vo Tu Duc

Navigating the complexities of enterprise Cloud Engineering, Google Workspace integrations, and advanced NOC automation can be daunting. If you want to ensure your architecture is built on proven best practices and tailored to your exact operational requirements, expert guidance is an invaluable asset.

Take the guesswork out of your scaling journey by booking a discovery call with Vo Tu Duc, a recognized Google Developer Expert (GDE). With deep, specialized expertise across Google Cloud and the Google Workspace ecosystem, Vo Tu Duc can help you architect a resilient, high-throughput alerting system that seamlessly bridges PagerDuty, Jira, and Google Chat.

During this discovery session, you will:

  • Review your current incident management infrastructure and Apps Script deployments.

  • Identify potential scaling bottlenecks and security vulnerabilities.

  • Map out a strategic, event-driven architecture plan that aligns with your specific business objectives and MTTR goals.

Whether you need to optimize your existing scripts or migrate to a robust, serverless Google Cloud architecture, connecting with a GDE is the most effective way to future-proof your NOC operations.


Tags

NOC, Incident Management, PagerDuty, Jira, Google Chat, Google Apps Script, Automation
