Workflow Library: 15 Templates

n8n Marketing Automation Workflows for Growth Teams

Fifteen battle-tested n8n workflows covering lead generation, nurturing, ad ops, reporting, and social monitoring. Each template includes the exact nodes, trigger logic, and estimated setup time so you can deploy in an afternoon.


01

Why n8n for Marketing Automation

Most marketing teams default to Zapier or Make.com because they are easy to start with. The problem surfaces at scale. Zapier charges per task execution, and a single workflow that checks Reddit every 4 hours, enriches leads, scores them, and pushes to a CRM can burn through 5,000+ tasks per month. At Zapier's Team plan pricing, that is $69/month for a single workflow. Multiply by 15 workflows and you are looking at infrastructure costs that rival a junior hire. n8n, self-hosted on a $20/month VPS, runs unlimited executions with zero per-task fees. For growth teams running high-frequency automations, this is the deciding factor.

Beyond cost, n8n gives you something Zapier and Make.com cannot: full code-level control inside a visual builder. You can write custom JavaScript in Function nodes, build recursive loops with sub-workflows, and deploy conditional branching logic that would require three separate Zaps to replicate. If your marketing stack involves custom API endpoints, webhook transformations, or AI processing with specific prompt chains, n8n handles it natively. Make.com comes close on flexibility but lacks n8n's self-hosting capability, which matters when you process sensitive lead data that cannot leave your infrastructure.

The tradeoff is real: n8n requires more technical setup than Zapier. You need a server, basic Docker knowledge, and comfort with JSON. But for any team already running 10+ automations, the migration pays for itself within the first month. The workflows in this guide are designed to get you there faster.

02

The n8n Marketing Stack

n8n connects to 400+ services via built-in nodes. For marketing automation, these are the integrations that matter most and the ones referenced throughout the workflows below.

CRM Layer

HubSpot, Salesforce, Pipedrive. Use the HubSpot node for contact creation, deal updates, and lifecycle stage changes. Salesforce connects via OAuth2 for enterprise pipelines.

Ad Platforms

Meta Ads and Google Ads via their respective API nodes. Pull spend, impressions, CTR, and conversions. Use HTTP Request node for platforms without native nodes (LinkedIn Ads, TikTok Ads).

Analytics

GA4 via the Google Analytics node (Data API). Mixpanel and Amplitude via HTTP Request nodes with API key auth. Pull event counts, funnel metrics, and cohort data.

Communication

Slack (webhook or OAuth), Gmail/SMTP for email sequences, Telegram for mobile alerts. Slack is the default notification layer in most workflows here.

03

Lead Generation Workflows

Reddit Lead Mining

~30 min setup

Monitors target subreddits for posts matching your ICP, classifies them with GPT-4o-mini into lead/content/question categories, scores relevancy, and routes qualified leads to Google Sheets + Slack.

Nodes: Schedule Trigger → Set (subreddit list) → HTTP Request (Reddit API) → OpenAI (classifier) → Switch → Google Sheets + Slack
Trigger: Schedule, every 4 hours

LinkedIn Profile Enrichment Pipeline

~45 min setup

Takes a list of LinkedIn profile URLs from a Google Sheet, enriches each through a data provider API (Proxycurl or RocketReach), extracts job title, company size, and industry, then pushes enriched contacts into HubSpot with custom properties.

Nodes: Schedule Trigger → Google Sheets (read URLs) → HTTP Request (Proxycurl API) → Function (normalize data) → IF (filter by company size) → HubSpot (create/update contact)
Trigger: Schedule, daily at 6:00 AM

Key detail: Use a Function node to map Proxycurl's company.employee_count field into HubSpot's company size property. Set the IF node to filter for companies with 50-500 employees to focus on mid-market leads.
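The mapping can be sketched as the body of that Function node in plain JavaScript. The Proxycurl field names and HubSpot property names here are assumptions; verify them against your actual API response and portal schema before deploying.

```javascript
// Sketch of the normalize step: map Proxycurl's employee count into a
// HubSpot-friendly company size bucket. Field names are illustrative.
function normalizeProfile(profile) {
  const employees = profile.company ? profile.company.employee_count : null;
  return {
    jobtitle: profile.job_title || '',
    industry: profile.industry || '',
    employee_count: employees, // keep the raw count as a custom property
    company_size: employees === null ? 'unknown'
      : employees < 50 ? 'small'
      : employees <= 500 ? 'mid-market'
      : 'enterprise',
  };
}

// The downstream IF node keeps only mid-market leads (50-500 employees)
function isMidMarket(contact) {
  return contact.employee_count !== null
    && contact.employee_count >= 50
    && contact.employee_count <= 500;
}
```

Inside n8n, wrap this in the node's convention: read each incoming item's `json`, and return `[{ json: normalizeProfile(item.json) }]`.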

Website Visitor Intent Scoring

~60 min setup

Receives webhook events from your website (page views, form starts, pricing page visits), aggregates them per visitor session using a Function node, calculates a behavioral intent score, and creates a HubSpot contact when the score exceeds your threshold.

Nodes: Webhook (receives tracking events) → Function (session aggregation + scoring) → IF (score ≥ 40) → HubSpot (create contact) → Slack (alert sales)
Trigger: Webhook (real-time)

Scoring logic: Assign points per action: pricing page = 15, case study = 10, blog = 3, form start = 20, documentation = 5. Sum per session. A threshold of 40 catches visitors who start a form (20), visit the pricing page (15), and view at least one case study or documentation page.
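The scoring step can be sketched as a small Function node. The event `type` names below are assumptions; match them to whatever your tracking webhook actually sends.

```javascript
// Points per action, matching the scoring table above
const WEIGHTS = {
  pricing: 15,
  case_study: 10,
  blog: 3,
  form_start: 20,
  documentation: 5,
};

// events: one session's tracked actions, e.g. [{ type: 'pricing' }, ...]
function scoreSession(events) {
  return events.reduce((sum, e) => sum + (WEIGHTS[e.type] || 0), 0);
}

const THRESHOLD = 40; // drives the IF node before HubSpot contact creation

function isQualified(events) {
  return scoreSession(events) >= THRESHOLD;
}
```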

04

Lead Nurturing Workflows

Lead Score Email Sequence Trigger

~40 min setup

Listens to HubSpot contact property changes via webhook. When a lead's score crosses defined thresholds (e.g., 30, 60, 90), it triggers the corresponding email sequence in your ESP. Prevents duplicate enrollments by checking a custom HubSpot property before sending.

Nodes: Webhook (HubSpot property change) → HubSpot (get contact details) → Switch (score ranges: 30-59, 60-89, 90+) → IF (check enrolled_sequence property) → HTTP Request (ESP enrollment API) → HubSpot (update enrolled flag)
Trigger: Webhook (HubSpot workflow enrollment or property change subscription)

CRM Stage-Based Slack Notifications

~20 min setup

Fires a formatted Slack message to the appropriate channel whenever a deal moves stages in HubSpot. Maps each pipeline stage to a specific Slack channel (e.g., #new-leads, #proposals, #closed-won) and includes deal value, contact name, and days in previous stage.

Nodes: Webhook (HubSpot deal stage change) → HubSpot (get deal + associated contact) → Switch (stage mapping) → Slack (post to mapped channel)
Trigger: Webhook (real-time)

Dormant Lead Re-engagement Trigger

~35 min setup

Runs daily, queries HubSpot for contacts with no activity in the last 30 days (no email opens, no page views, no form fills). Segments them by original lead source and triggers a tailored re-engagement email via your ESP. Contacts inactive for 90+ days are moved to a dormant lifecycle stage.

Nodes: Schedule Trigger → HubSpot (search contacts, filter by last_activity_date < 30 days ago) → Switch (by lead source) → HTTP Request (ESP re-engagement campaign) → IF (inactive > 90 days) → HubSpot (update lifecycle stage)
Trigger: Schedule, daily at 8:00 AM
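The date math behind the 30-day filter and the 90-day IF node can be sketched as plain JavaScript. The `last_activity_date` property name mirrors the filter above; adjust it to your HubSpot portal's actual field.

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Days since the contact's last recorded activity
function daysInactive(lastActivityIso, now = new Date()) {
  return Math.floor((now - new Date(lastActivityIso)) / DAY_MS);
}

// Which branch a contact takes: still active, re-engagement email,
// or moved to the dormant lifecycle stage
function routeContact(contact, now = new Date()) {
  const days = daysInactive(contact.last_activity_date, now);
  if (days < 30) return 'active';
  if (days >= 90) return 'dormant';
  return 're-engage';
}
```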

05

Ad Campaign Automation Workflows

These workflows help you catch budget overruns, creative fatigue, and performance drops before they waste spend. If you manage ad budgets directly, see the budget planning resources for the forecasting models that pair with these alerts.

Daily Budget Pacing Alerts

~30 min setup

Pulls yesterday's spend from Meta Ads and Google Ads APIs, compares against your daily budget target (monthly budget / days in month), and sends a Slack alert if spend is more than 15% over or under pace. Includes remaining budget for the month and projected end-of-month spend.

Nodes: Schedule Trigger → HTTP Request (Meta Marketing API, fields=spend) → HTTP Request (Google Ads API, metrics=cost_micros) → Function (calculate pacing: actual vs target, project EOM) → IF (deviation > 15%) → Slack
Trigger: Schedule, daily at 9:00 AM
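The pacing calculation is simple enough to sketch as the Function node in the middle of this workflow. It assumes a fixed monthly budget passed in as a workflow variable.

```javascript
// Compare spend-to-date against a linear pacing target and project
// end-of-month spend from the current run rate.
function pacing(monthlyBudget, spentToDate, dayOfMonth, daysInMonth) {
  const dailyTarget = monthlyBudget / daysInMonth;
  const targetToDate = dailyTarget * dayOfMonth;
  const deviation = (spentToDate - targetToDate) / targetToDate; // + over, - under
  const projectedEom = (spentToDate / dayOfMonth) * daysInMonth;
  return {
    dailyTarget,
    deviation,
    projectedEom,
    remaining: monthlyBudget - spentToDate,
    alert: Math.abs(deviation) > 0.15, // drives the IF node before Slack
  };
}
```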

Creative Fatigue Detection

~45 min setup

Fetches ad-level performance data daily, calculates 7-day rolling CTR for each creative, and flags ads where CTR has dropped more than 20% from their 7-day peak. Sends a Slack message listing fatigued creatives with their current vs. peak CTR so you can rotate them before wasting spend.

Nodes: Schedule Trigger → HTTP Request (Meta Ads insights, level=ad, date_preset=last_7d) → Function (calculate rolling CTR, compare to peak, flag >20% drops) → IF (fatigued creatives exist) → Slack (formatted list with creative name, current CTR, peak CTR)
Trigger: Schedule, daily at 10:00 AM
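A sketch of the fatigue check, simplified to compare each creative's latest daily CTR against its 7-day peak. The input shape is an assumption; Meta's insights response needs flattening into this form upstream.

```javascript
// Flag a creative whose latest CTR has fallen more than 20% below
// its peak within the 7-day window (series is oldest-first).
function isFatigued(series, dropThreshold = 0.2) {
  const peak = Math.max(...series);
  const current = series[series.length - 1];
  return peak > 0 && (peak - current) / peak > dropThreshold;
}

// creatives: [{ name, ctrSeries }] normalized from the Meta response
function flagFatigued(creatives) {
  return creatives
    .filter((c) => isFatigued(c.ctrSeries))
    .map((c) => ({
      name: c.name,
      current: c.ctrSeries[c.ctrSeries.length - 1],
      peak: Math.max(...c.ctrSeries),
    }));
}
```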

Weekly Performance Digest to Google Sheets

~40 min setup

Every Monday morning, aggregates the previous week's campaign data across Meta and Google Ads, calculates key metrics (spend, impressions, clicks, conversions, CPA, ROAS), appends a new row per campaign to a Google Sheet, and posts a summary to Slack with week-over-week trends.

Nodes: Schedule Trigger → HTTP Request (Meta) → HTTP Request (Google Ads) → Function (merge, calculate WoW deltas) → Google Sheets (append rows) → Slack (formatted digest with trend arrows)
Trigger: Schedule, Mondays at 8:00 AM
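The week-over-week delta and trend-arrow step can be sketched as below. The 2% "flat" band is an illustrative choice, not part of the workflow spec.

```javascript
// Relative change vs the previous week; null when there is no prior
// week (first run) or the prior value is zero.
function wowDelta(current, previous) {
  if (!previous) return null;
  return (current - previous) / previous;
}

// Arrow glyph for the Slack digest
function trendArrow(delta) {
  if (delta === null) return '·';
  return delta > 0.02 ? '▲' : delta < -0.02 ? '▼' : '▬';
}
```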

06

Analytics and Reporting Workflows

Automated Weekly KPI Dashboard Update

~50 min setup

Pulls data from GA4 (sessions, conversions), HubSpot (new contacts, deals created), and your ad platforms (spend, revenue). Writes everything to a structured Google Sheet that powers a Looker Studio dashboard. One workflow replaces 3 manual data pulls every Monday.

Nodes: Schedule Trigger → HTTP Request (GA4 Data API) → HubSpot (search contacts + deals) → HTTP Request (Meta + Google Ads) → Function (normalize all sources into unified schema) → Google Sheets (append to KPI sheet)
Trigger: Schedule, Mondays at 7:00 AM

Anomaly Detection Alerts

~45 min setup

Compares today's metrics against the trailing 7-day average. Alerts on spend spikes (>30% above average), conversion drops (>25% below average), and traffic anomalies. Catches issues like a broken tracking pixel or a campaign accidentally left running over the weekend.

Nodes: Schedule Trigger → HTTP Request (GA4 + ad platforms, last 8 days) → Function (calculate 7-day avg, compare today, flag deviations) → IF (any anomaly) → Slack (alert with metric name, expected range, actual value)
Trigger: Schedule, daily at 11:00 AM (allows morning data to settle)
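The comparison logic can be sketched as a single Function node, using the thresholds above:

```javascript
// Compare today's value to the trailing 7-day average and return an
// alert object (or null) per metric. Thresholds match the workflow:
// spend spikes > 30% above average, conversion drops > 25% below.
function detectAnomaly(metric, today, history) {
  const avg = history.reduce((a, b) => a + b, 0) / history.length;
  const deviation = (today - avg) / avg;
  if (metric === 'spend' && deviation > 0.3) {
    return { metric, expected: avg, actual: today, kind: 'spend spike' };
  }
  if (metric === 'conversions' && deviation < -0.25) {
    return { metric, expected: avg, actual: today, kind: 'conversion drop' };
  }
  return null;
}
```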

Cross-Channel Attribution Aggregation

~60 min setup

Collects conversion data with UTM parameters from GA4, matches them against ad platform reported conversions, and writes both views to a single Google Sheet. Exposes discrepancies between platform-reported and analytics-reported conversions so you can make informed budget allocation decisions. For a real-world implementation of this approach, see the Alphorm case study.

Nodes: Schedule Trigger → HTTP Request (GA4 with UTM dimensions) → HTTP Request (Meta conversions) → HTTP Request (Google Ads conversions) → Function (merge by campaign name, calculate delta %) → Google Sheets (attribution comparison sheet)
Trigger: Schedule, weekly on Tuesdays at 9:00 AM

07

Social Monitoring Workflows

Reddit/Twitter Mention Monitoring

~30 min setup

Searches Reddit (via API) and Twitter/X (via search API or Apify actor) for mentions of your brand name, product name, or founder name. Deduplicates against a Google Sheet log of previously seen post IDs. New mentions get posted to a #brand-mentions Slack channel with the post text, author, and direct link.

Nodes: Schedule Trigger → HTTP Request (Reddit search API) → HTTP Request (Twitter/Apify) → Merge → Google Sheets (check for duplicates) → IF (new mention) → Slack + Google Sheets (log)
Trigger: Schedule, every 2 hours
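The dedup step can be sketched as below, assuming the sheet's post-ID column has been read into an array upstream:

```javascript
// Keep only mentions whose IDs are not already in the Google Sheets log
function filterNewMentions(mentions, seenIds) {
  const seen = new Set(seenIds);
  return mentions.filter((m) => !seen.has(m.id));
}
```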

Competitor Content Tracking

~35 min setup

Monitors competitor blogs via RSS feeds. When a new post is published, it fetches the full content via HTTP Request, sends it to an OpenAI node for summarization (3-bullet summary + topic classification), and posts the summary to a #competitor-intel Slack channel. Keeps your team aware of competitor positioning without anyone manually checking 10 blogs.

Nodes: RSS Feed Trigger (one per competitor blog URL) → HTTP Request (fetch full article) → OpenAI (summarize + classify topic) → Slack (post to #competitor-intel)
Trigger: RSS Feed Trigger (checks every 30 minutes)

Brand Sentiment Scoring

~50 min setup

Extends the mention monitoring workflow by passing each mention through an OpenAI node for sentiment analysis (positive, neutral, negative + confidence score). Logs results to Google Sheets with a running weekly sentiment average. Alerts Slack only on negative mentions with high confidence (>0.8) so you can respond quickly to complaints or PR issues.

Nodes: Schedule Trigger → HTTP Request (Reddit + Twitter search) → OpenAI (sentiment analysis prompt) → Google Sheets (log with sentiment + score) → IF (negative + confidence > 0.8) → Slack (urgent alert to #brand-crisis)
Trigger: Schedule, every 3 hours

Done For You

Want These Workflows Built and Deployed?

We implement, test, and optimize these n8n workflows for your specific stack. Includes credential setup, custom logic for your CRM, and 30-day monitoring.

Book Consultation

08

How to Get Started with n8n

1. Deploy n8n on a VPS

Spin up a $20/month DigitalOcean droplet (2GB RAM is enough for most teams). Install Docker, then run docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n. Alternatively, use n8n Cloud if you want managed hosting.

2. Set Up Credentials

Go to Settings → Credentials in n8n. Add OAuth2 credentials for Google (Sheets, Analytics), API keys for OpenAI and your ad platforms, and webhook URLs for Slack. Each workflow above lists exactly which credentials it needs.

3. Start with One Workflow

Pick the workflow closest to your biggest pain point. If you are manually checking Reddit for leads, start with the Reddit Lead Mining workflow. If reporting is eating your Mondays, start with the KPI dashboard update.

4. Test with Manual Executions

Before activating any workflow, use the "Execute Workflow" button to run it manually. Check each node's output panel to verify data flows correctly. Fix any credential or mapping errors before going live.

5. Activate and Monitor

Toggle the workflow to Active. Check the Executions log daily for the first week to catch edge cases (API rate limits, empty data sets, malformed responses). Once stable, add the next workflow.

09

Common n8n Mistakes Marketers Make

Not Handling Empty API Responses

If an API returns zero results (e.g., no new Reddit posts), downstream nodes will error out. Always add an IF node after HTTP Request nodes to check if the response array length is greater than zero before processing.
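A minimal guard, sketched as plain JavaScript. The `data` field is a placeholder; match it to the actual response shape of the API you call.

```javascript
// Normalize an API response to an array so downstream nodes never see
// undefined. The IF node then checks safeItems(response).length > 0.
function safeItems(response) {
  return response && Array.isArray(response.data) ? response.data : [];
}
```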

Running Too Many Workflows at Once

A 2GB VPS can handle roughly 10-15 active workflows. If you stack 30+ workflows with overlapping schedules, memory spikes will crash n8n. Stagger schedule times and monitor memory usage via docker stats.

Ignoring API Rate Limits

Reddit allows 60 requests per minute. Meta's Marketing API has tier-based limits. If your workflow loops through 100 items hitting an API each time, add a Wait node (1-2 seconds) inside the loop or use batch processing to stay under limits.
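If you prefer to throttle inside a Code node rather than with a Wait node, the pattern can be sketched like this (batch size and delay are illustrative; tune them to the API's limits):

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Process items in small batches with a pause between batches to stay
// under per-minute rate limits. handler is an async function per item.
async function processInBatches(items, batchSize, delayMs, handler) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...await Promise.all(batch.map(handler)));
    if (i + batchSize < items.length) await sleep(delayMs);
  }
  return results;
}
```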

Hardcoding Values Instead of Using Environment Variables

API keys, sheet IDs, and channel names should live in n8n's environment variables or credential store, not inline in Function nodes. This makes it possible to clone workflows across environments (staging vs. production) without manual edits.

No Error Handling or Alerting

Workflows fail silently by default. Use n8n's Error Trigger node to catch failures across all workflows and send a single Slack alert to a #n8n-errors channel. Include the workflow name, node that failed, and the error message in the notification.
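A sketch of the formatting step between the Error Trigger and the Slack node. The payload paths here are assumptions; inspect the Error Trigger's actual output in your n8n version and adjust them.

```javascript
// Shape the error payload into one readable Slack line with the
// workflow name, failing node, and error message.
function formatErrorAlert(payload) {
  const workflow = payload.workflow?.name || 'unknown workflow';
  const node = payload.execution?.lastNodeExecuted || 'unknown node';
  const message = payload.execution?.error?.message || 'no message';
  return `:rotating_light: ${workflow} failed at ${node}: ${message}`;
}
```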

10

Frequently Asked Questions

Is n8n free to use?

n8n is source-available under a fair-code license. Self-hosting is free for individual use and small teams. If you have more than three users or need SSO, you will need a paid license. n8n Cloud (managed hosting) starts at around $20/month and removes the need for server management.

How does n8n compare to Zapier for marketing workflows?

For simple two-step automations (form fill → Slack notification), Zapier is faster to set up. For anything involving conditional logic, data transformation, loops, or AI processing, n8n is significantly more capable and cost-effective. Growth teams typically save 60-80% on automation costs by switching from Zapier to self-hosted n8n.

Do I need coding skills to use n8n?

Most workflows can be built using n8n's visual editor without writing code. However, the Function node (JavaScript) is where n8n becomes powerful for marketers. Basic familiarity with JSON and JavaScript expressions will let you build the scoring, transformation, and conditional logic that separates basic automation from workflows that actually drive revenue.

Can I run n8n on my laptop or does it need a server?

You can run n8n locally for testing and development. For production workflows that need to run on schedules (like all 15 workflows in this guide), you need a server that stays online. A $20/month VPS from DigitalOcean, Hetzner, or Railway works well. Use Docker for easy updates and backup.

What happens if a workflow fails mid-execution?

n8n logs every execution with full input/output data for each node. Failed executions are marked in the Executions panel so you can inspect exactly where and why it broke. Set up the Error Trigger node (described in the Common Mistakes section above) to get Slack alerts whenever a workflow fails. For critical workflows, you can also enable retry-on-failure in the workflow settings.

Start Automating Your Marketing Stack