Mastering Long-Running Tasks in ASP.NET Core Without Blocking Requests

Long-running tasks in ASP.NET Core, such as generating reports, processing files, or calling slow third-party services, shouldn’t run inside controller actions.
Keeping an HTTP request open while you do heavy work causes real problems:

  • Timeouts (clients, reverse proxies, load balancers)
  • Lower throughput (requests occupy resources longer)
  • Unreliable execution (deployments/restarts kill in-flight work)
  • Retry amplification (timeouts trigger retries → more load → more timeouts)

A better pattern is simple:

  1. Accept the request quickly
  2. Enqueue the work
  3. Return 202 Accepted with a job ID
  4. Process the work in the background
  5. Let the client check status (or receive a webhook callback)

What counts as a “long-running task” in ASP.NET Core

A “long-running task” is any work that’s long enough to risk timeouts, tie up resources, or get interrupted by restarts.
Common examples:

  • Bulk email sending
  • PDF/Excel report generation
  • File post-processing (virus scan, thumbnails, transcodes)
  • Video/image processing
  • Slow third-party API calls (payments, CRM sync, KYC)
  • Webhook retries
  • Scheduled callbacks

A practical rule: if the work isn’t required to build the immediate HTTP response, it probably shouldn’t run in the controller.

How does the “Enqueue + 202 Accepted” pattern work?

Here’s the backbone pattern you can reuse with BackgroundService, Hangfire, or a message broker.

API controller diagram:

Bad:  Controller → Long work → Client waits
Good: Controller → Queue → Worker → Client moves on

“Fire-and-forget” in ASP.NET Core: When it’s OK (and when it’s not) 

Best-effort fire-and-forget is only acceptable when you can tolerate lost work. 
Reason: if the process crashes or restarts, in-memory work disappears. Also, exceptions can be missed if you don’t observe them. 

OK for (best-effort): 

  • Non-critical telemetry enrichment 
  • Optional cache warmups 
  • Low-risk tasks you’re fine skipping 

Not OK for: 

  • Anything that must happen (billing, compliance workflows) 
  • Emails users expect 
  • Reports users rely on 
  • Data sync that must be consistent 

If you need reliability, use a queue or job system. 
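When best-effort truly is acceptable, at least observe the task so exceptions are logged rather than silently swallowed. A minimal sketch (the SafeFireAndForget helper name is our own, not a framework API):

```csharp
using System;
using System.Threading.Tasks;

public static class FireAndForgetExtensions
{
    // Best-effort only: work still disappears on process exit,
    // but failures are at least observed instead of swallowed.
    public static void SafeFireAndForget(this Task task, Action<Exception> onError)
    {
        task.ContinueWith(
            t => onError(t.Exception!.GetBaseException()),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}
```

Usage: WarmCacheAsync().SafeFireAndForget(ex => logger.LogError(ex, "Cache warmup failed")); — and only for work you can afford to lose.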

Choosing the right approach: Quick decision guide 

Pick the simplest option that meets your durability and scale needs. 

Use case                                            Recommended approach
Best-effort background work (small, non-critical)   BackgroundService
Background work with backpressure inside the API    BackgroundService + in-memory queue (Channel)
CPU-heavy work                                      Separate worker service/process
Jobs must survive restarts                          Durable queue (RabbitMQ/SQS/Service Bus) + worker
Scheduling/recurring jobs                           Hangfire or Quartz.NET
Need dashboard + retries + persistence quickly      Hangfire

Hosted services basics: IHostedService vs BackgroundService 

ASP.NET Core “hosted services” are the built-in way to run background logic alongside your app. 

  • IHostedService is the low-level interface (StartAsync / StopAsync). 
  • BackgroundService is a helper base class that gives you ExecuteAsync(CancellationToken) for long-running loops and handles start/stop glue for you. 

If you’re building a worker loop, BackgroundService is usually the easiest starting point. 
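For contrast, the raw interface shape looks like this, as a minimal sketch (real implementations usually keep a reference to the running task so StopAsync can await it):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public sealed class StartupBanner : IHostedService
{
    // StartAsync runs during host startup and should return quickly;
    // long-running loops belong in BackgroundService.ExecuteAsync instead.
    public Task StartAsync(CancellationToken cancellationToken)
    {
        Console.WriteLine("Host starting");
        return Task.CompletedTask;
    }

    // StopAsync runs during graceful shutdown, before the host exits.
    public Task StopAsync(CancellationToken cancellationToken)
    {
        Console.WriteLine("Host stopping");
        return Task.CompletedTask;
    }
}
```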

A safe BackgroundService pattern (bounded work + cancellation) 

Here’s a basic worker that: 

  • Runs a loop 
  • Honors shutdown via CancellationToken 
  • Avoids a “tight failure loop” with a small delay
.NET
using Microsoft.Extensions.Hosting; 
using Microsoft.Extensions.Logging; 
public sealed class ExampleWorker : BackgroundService 
{ 
    private readonly ILogger<ExampleWorker> _logger;
    public ExampleWorker(ILogger<ExampleWorker> logger) => _logger = logger;
    protected override async Task ExecuteAsync(CancellationToken stoppingToken) 
    { 
        _logger.LogInformation("Worker started"); 
        while (!stoppingToken.IsCancellationRequested) 
        { 
            try 
            { 
                // Do one bounded unit of work per loop iteration 
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken); 
                _logger.LogInformation("Worker heartbeat at {Time}", DateTimeOffset.UtcNow); 
            } 
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested) 
            {
                // graceful shutdown 
            } 
            catch (Exception ex) 
            { 
                _logger.LogError(ex, "Unhandled exception in worker loop"); 
                await Task.Delay(TimeSpan.FromSeconds(2), stoppingToken); 
            } 
        } 
        _logger.LogInformation("Worker stopping"); 
    } 
} 
Register it:
builder.Services.AddHostedService<ExampleWorker>();

If your goal is “don’t block HTTP requests,” the cleanest approach is producer–consumer:

  • Controllers produce work by enqueuing jobs
  • A worker consumes jobs and executes them

For an in-process queue, System.Threading.Channels is a great fit: it’s async-friendly, fast, and supports bounded capacity (backpressure).
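A tiny standalone sketch of that backpressure behavior: with capacity 1 and FullMode.Wait, a second write cannot complete until a reader frees the slot.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(1)
{
    FullMode = BoundedChannelFullMode.Wait // writers wait instead of dropping work
});

await channel.Writer.WriteAsync(1);         // fills the single slot
var pending = channel.Writer.WriteAsync(2); // waits: the channel is full

var first = await channel.Reader.ReadAsync(); // frees the slot...
await pending;                                // ...so the second write completes
var second = await channel.Reader.ReadAsync();

Console.WriteLine($"{first}, {second}"); // prints "1, 2"
```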

Step 1: Define a job contract 

Keep the message small. Pass IDs, not big payloads (store large data in DB/blob storage).

.NET
public sealed record WorkItem(
    string JobId,
    Func<CancellationToken, Task> Handler
);

Step 2: Create a bounded Channel-based queue 

Bounded capacity prevents your API from “accepting infinite work” during spikes. 

.NET
using System.Threading.Channels; 
public interface IBackgroundTaskQueue
{
    ValueTask EnqueueAsync(WorkItem item, CancellationToken ct = default);
    ValueTask<WorkItem> DequeueAsync(CancellationToken ct);
}
public sealed class ChannelBackgroundTaskQueue : IBackgroundTaskQueue
{
    private readonly Channel<WorkItem> _channel;
    public ChannelBackgroundTaskQueue(int capacity = 1000)
    {
        _channel = Channel.CreateBounded<WorkItem>(
            new BoundedChannelOptions(capacity)
            {
                FullMode = BoundedChannelFullMode.Wait
            });
    }
    public ValueTask EnqueueAsync(WorkItem item, CancellationToken ct = default)
        => _channel.Writer.WriteAsync(item, ct);
    public ValueTask<WorkItem> DequeueAsync(CancellationToken ct)
        => _channel.Reader.ReadAsync(ct);
}

Step 3: Build a worker that consumes the queue 

This worker stops cleanly during shutdown and logs job outcomes. 

.NET
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
public sealed class QueueWorker : BackgroundService
{
    private readonly IBackgroundTaskQueue _queue;
    private readonly ILogger<QueueWorker> _logger;
    public QueueWorker(IBackgroundTaskQueue queue, ILogger<QueueWorker> logger)
    {
        _queue = queue;
        _logger = logger;
    }
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            WorkItem item;
            try
            {
                item = await _queue.DequeueAsync(stoppingToken);
            }
            catch (OperationCanceledException)
            {
                break; // graceful shutdown
            }
            try
            {
                await item.Handler(stoppingToken);
                _logger.LogInformation("Job {JobId} completed", item.JobId);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Job {JobId} failed", item.JobId);
            }
        }
    }
}

Step 4: Enqueue from a controller and return 202 Accepted 

The controller stays fast. The background worker does the heavy lifting. 

.NET
using Microsoft.AspNetCore.Mvc; 
[ApiController] 
[Route("api/reports")] 
public sealed class ReportsController : ControllerBase 
{ 
    private readonly IBackgroundTaskQueue _queue; 
    public ReportsController(IBackgroundTaskQueue queue) => _queue = queue; 
    [HttpPost("{reportId}:generate")] 
    public async Task<IActionResult> Generate(string reportId, CancellationToken ct)
    { 
        var jobId = Guid.NewGuid().ToString("n"); 
        await _queue.EnqueueAsync( 
            new WorkItem(jobId, async token => 
            { 
                // Example: generate report, store it, update job status in DB 
                await Task.Delay(TimeSpan.FromSeconds(10), token); 
            }), 
            ct); 
        return Accepted(new { jobId, statusUrl = $"/api/jobs/{jobId}" }); 
    } 
}   

Step 5: Wire everything up in DI 

.NET
builder.Services.AddSingleton<IBackgroundTaskQueue>(_ =>
    new ChannelBackgroundTaskQueue(capacity: 500));
builder.Services.AddHostedService<QueueWorker>();

Job status: Make 202 Accepted actually useful

Returning 202 is only half the story. Clients need a way to check progress. 

A practical contract: 

  • POST /reports/{id}:generate → 202 Accepted with { jobId, statusUrl } 
  • GET /jobs/{jobId} → queued | running | completed | failed (+ output link when ready) 
  • Optional: webhook callback for async clients 

Minimal job status model (example) 

.NET
public enum JobState { Queued, Running, Completed, Failed } 
public sealed record JobStatus( 
    string JobId, 
    JobState State, 
    string? ResultUrl = null, 
    string? Error = null 
);    
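A minimal in-memory store for these statuses (the type definitions are repeated so the sketch compiles standalone; in production, persist statuses in a database so they survive restarts):

```csharp
using System.Collections.Concurrent;

// Repeated from above so this sketch is self-contained.
public enum JobState { Queued, Running, Completed, Failed }
public sealed record JobStatus(
    string JobId,
    JobState State,
    string? ResultUrl = null,
    string? Error = null);

public sealed class InMemoryJobStatusStore
{
    // ConcurrentDictionary makes Set/Find safe across worker and request threads.
    private readonly ConcurrentDictionary<string, JobStatus> _statuses = new();

    public void Set(JobStatus status) => _statuses[status.JobId] = status;

    public JobStatus? Find(string jobId) =>
        _statuses.TryGetValue(jobId, out var status) ? status : null;
}
```

A GET /api/jobs/{jobId} endpoint then becomes a simple Find plus 404-on-null.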

When in-memory queues aren’t enough: Durability across restarts 

An in-memory queue is great for getting started, but it has a hard limit: 

  • If the process dies, queued work is gone 
  • If you scale down, in-flight work can be lost 
  • If you deploy, work might be interrupted 

When the work must survive restarts, move the job storage outside the web process:

  • Message brokers: Azure Service Bus, RabbitMQ, AWS SQS 
  • Job systems: Hangfire (persistent jobs + retries + dashboard) 
  • Separate worker services: clean separation, independent scaling 
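On the producing side, the API sends to the broker instead of an in-memory channel. A sketch with Azure.Messaging.ServiceBus (the "report-jobs" queue name and the message shape are assumptions matching the consumer below):

```csharp
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public sealed class ReportJobPublisher
{
    private readonly ServiceBusSender _sender;

    public ReportJobPublisher(ServiceBusClient client) =>
        _sender = client.CreateSender("report-jobs");

    public async Task PublishAsync(string jobId, string reportId, CancellationToken ct)
    {
        // Keep messages small: pass IDs, store large payloads in DB/blob storage.
        var body = JsonSerializer.Serialize(new { jobId, reportId });

        // MessageId doubles as a broker-level duplicate-detection key.
        var message = new ServiceBusMessage(body) { MessageId = jobId };

        await _sender.SendMessageAsync(message, ct);
    }
}
```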

How do you consume jobs using Azure Service Bus in ASP.NET Core?

This shows the general “durable queue + worker” pattern. Production code should also handle idempotency, poison messages, and backoff. 

.NET
using Azure.Messaging.ServiceBus; 
using Microsoft.Extensions.Hosting; 
using Microsoft.Extensions.Logging; 
public sealed class ServiceBusWorker : BackgroundService 
{ 
    private readonly ServiceBusProcessor _processor; 
    private readonly ILogger<ServiceBusWorker> _logger;
    public ServiceBusWorker(ServiceBusClient client, ILogger<ServiceBusWorker> logger)
    { 
        _logger = logger; 
        _processor = client.CreateProcessor("report-jobs", new ServiceBusProcessorOptions 
        { 
            MaxConcurrentCalls = 4, 
            AutoCompleteMessages = false
        }); 
        _processor.ProcessMessageAsync += OnMessageAsync; 
        _processor.ProcessErrorAsync += OnErrorAsync; 
    }  
    protected override async Task ExecuteAsync(CancellationToken stoppingToken) 
    { 
        await _processor.StartProcessingAsync(stoppingToken); 
        await Task.Delay(Timeout.InfiniteTimeSpan, stoppingToken); 
    } 
    public override async Task StopAsync(CancellationToken cancellationToken) 
    { 
        await _processor.StopProcessingAsync(cancellationToken); 
        await _processor.DisposeAsync(); 
        await base.StopAsync(cancellationToken); 
    } 
    private async Task OnMessageAsync(ProcessMessageEventArgs args) 
    { 
        var body = args.Message.Body.ToString();
        _logger.LogInformation("Received job: {Body}", body); 
        try 
        { 
            // Do work (make it idempotent) 
            await Task.Delay(TimeSpan.FromSeconds(5), args.CancellationToken);
            await args.CompleteMessageAsync(args.Message, args.CancellationToken); 
        } 
        catch (Exception ex) 
        { 
            _logger.LogError(ex, "Job failed; abandoning message"); 
            await args.AbandonMessageAsync(args.Message, cancellationToken: args.CancellationToken); 
        } 
    } 
    private Task OnErrorAsync(ProcessErrorEventArgs args) 
    { 
        _logger.LogError(args.Exception, "Service Bus error: {Entity}", args.EntityPath); 
        return Task.CompletedTask; 
    } 
} 

Hangfire vs BackgroundService: When to pick Hangfire

Use Hangfire when you want: 

  • Durable, persisted jobs 
  • Automatic retries 
  • Delayed and recurring jobs 
  • A dashboard for job visibility 
  • Less custom plumbing 

Example: Enqueue a Hangfire job from an API 

.NET
using Hangfire; 
using Microsoft.AspNetCore.Mvc; 
[ApiController] 
[Route("api/emails")] 
public sealed class EmailsController : ControllerBase 
{ 
    [HttpPost("send")] 
    public IActionResult SendEmail([FromBody] SendEmailRequest request) 
    { 
        var jobId = BackgroundJob.Enqueue(() =>
            EmailJobs.SendAsync(request.To, request.Subject, request.Body));
        return Accepted(new { jobId });
    }
} 
public sealed record SendEmailRequest(string To, string Subject, string Body); 
public static class EmailJobs 
{ 
    public static Task SendAsync(string to, string subject, string body) 
    { 
        // real email sender here 
        return Task.CompletedTask; 
    } 
}      

When Quartz.NET makes more sense 

Quartz.NET shines when your main need is scheduling, not processing a backlog. 

Use Quartz when: 

  • You need cron-style schedules 
  • Jobs are time-based and predictable 
  • You don’t need queue semantics as the core model 

If you need both scheduling and durable background jobs with retries and visibility, many teams choose Hangfire. 
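As a sketch, a cron-scheduled Quartz job looks like this (the job name and schedule are examples; the registration assumes the Quartz.Extensions.Hosting package):

```csharp
using System.Threading.Tasks;
using Quartz;

// One class per unit of scheduled work.
public sealed class NightlyCleanupJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // cleanup logic here
        return Task.CompletedTask;
    }
}

// Registration in Program.cs:
// builder.Services.AddQuartz(q =>
// {
//     var jobKey = new JobKey("nightly-cleanup");
//     q.AddJob<NightlyCleanupJob>(o => o.WithIdentity(jobKey));
//     q.AddTrigger(t => t.ForJob(jobKey)
//         .WithCronSchedule("0 0 2 * * ?")); // every night at 02:00
// });
// builder.Services.AddQuartzHostedService();
```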

Production essentials for background work 

Background processing isn’t just “run code later.” In production, these concerns matter most: 

1) Graceful shutdown 

  • Always honor CancellationToken 
  • Stop accepting new work during shutdown 
  • Make sure workers exit cleanly 

2) Idempotency (so retries don’t double-do work) 

Retries happen. Design for it: 

  • Use a jobId / requestId as an idempotency key 
  • Store state transitions (Queued → Running → Completed/Failed) 
  • Make handlers safe to run twice (or detect duplicates) 
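The core of the idea in a few lines (a sketch: in production the "already processed" check is a database unique constraint on the jobId, not an in-memory set):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class IdempotentExecutor
{
    // Stand-in for a DB table with a unique index on JobId.
    private readonly ConcurrentDictionary<string, bool> _processed = new();

    // Returns true if the work ran, false if this jobId was a duplicate delivery.
    public async Task<bool> RunOnceAsync(string jobId, Func<Task> work)
    {
        if (!_processed.TryAdd(jobId, true))
            return false; // retry/duplicate: skip the side effect

        await work();
        return true;
    }
}
```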

3) Backpressure and load control 

  • Prefer bounded queues 
  • Limit concurrency based on CPU/IO 
  • Add backoff to avoid rapid retry storms 
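Exponential backoff with jitter fits in a few lines (the 1-second base and 30-second cap are arbitrary example values):

```csharp
using System;

public static class RetryBackoff
{
    private static readonly Random Jitter = new();

    // attempt 1 → ~1s, 2 → ~2s, 3 → ~4s, ... capped at 30s, plus up to 1s of
    // jitter so many failing workers don't retry in lockstep.
    public static TimeSpan ForAttempt(int attempt)
    {
        var seconds = Math.Min(Math.Pow(2, attempt - 1), 30);
        return TimeSpan.FromSeconds(seconds + Jitter.NextDouble());
    }
}
```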

4) Observability (you can’t operate what you can’t see) 

At minimum, track: 

  • Enqueue rate 
  • Queue depth 
  • Job duration (queue time + processing time) 
  • Success/failure counts 
  • Retry and dead-letter counts (for brokers) 

Common mistakes to avoid 

  1. Using Task.Run() in controllers for important work (work can be lost, failures can be missed) 
  2. Blocking async code with .Result / .Wait() (thread pool starvation under load) 
  3. Unbounded queues (memory pressure and cascading failures) 
  4. No idempotency (retries cause duplicates) 
  5. No status endpoint (clients retry blindly)
  6. CPU-heavy jobs inside the web process (hurts p99 latency for unrelated requests) 

Summary: The safest path for long-running tasks in ASP.NET Core 

If you remember one thing: keep requests short, and make background work durable when it matters. 

  • Return quickly (often 202 Accepted) instead of holding connections open. 
  • Start with BackgroundService + Channel for simple in-process work. 
  • Move to a durable queue/job system when jobs can’t be lost. 
  • Design for idempotency, retries, shutdown safety, and visibility. 

Practical checklist 

Use this before shipping background processing to production: 

  • Controller returns quickly (no long work in request pipeline) 
  • Work is enqueued and processed by a background worker 
  • Queue is bounded (backpressure) 
  • Every job has a jobId (idempotency key) 
  • Job status is stored and queryable (/jobs/{jobId}) 
  • Worker honors CancellationToken and stops cleanly 
  • Retries have caps and backoff (and dead-letter/poison handling if durable) 
  • Side effects are idempotent (no double emails/charges) 
  • Metrics/logging exist for queue depth, latency, duration, failures 
  • CPU-heavy work runs outside the web process (if needed) 
