# Time to First Byte (TTFB): Reduce Latency with Better Hosting


Time to First Byte (TTFB) is one of the most fundamental web performance metrics — it measures how long a browser waits after requesting a URL before receiving the first byte of the response. High TTFB means users stare at blank screens, and it feeds directly into Google's Largest Contentful Paint (LCP) metric. This article explains what drives TTFB and how to reduce it systematically.

## What TTFB Actually Measures

```
t=0ms    Browser sends HTTP request
t=25ms   DNS lookup completes (domain → IP address)
t=65ms   TCP connection established
t=138ms  TLS handshake completes (for HTTPS)
t=312ms  Server sends first byte of response  ← TTFB = 312ms
```

TTFB captures everything from the moment the browser sends the request to the moment the server starts responding. It includes:

1. **DNS resolution time** — looking up the IP for your domain
2. **TCP connection time** — establishing the TCP connection
3. **TLS handshake time** — establishing the HTTPS encryption
4. **Server processing time** — your app's time to generate the response

The first three are largely network/infrastructure concerns. Server processing time is your application's concern.
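In the browser, these same phases can be read directly from the standard Navigation Timing API. A minimal sketch that splits a `PerformanceNavigationTiming` entry into the four components (pass `performance.getEntriesByType('navigation')[0]` when running in a real page):

```javascript
// Break a PerformanceNavigationTiming entry into TTFB phases.
// All fields below are standard Navigation Timing properties.
function ttfbPhases(nav) {
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    tcp: nav.connectEnd - nav.connectStart, // includes TLS time
    tls: nav.secureConnectionStart > 0
      ? nav.connectEnd - nav.secureConnectionStart
      : 0, // 0 for plain HTTP
    server: nav.responseStart - nav.requestStart, // request sent → first byte
    ttfb: nav.responseStart - nav.startTime,      // total wait, as reported to LCP
  };
}
```

Run it in the browser console on a live page to see where your own TTFB budget is going.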

## Measuring TTFB

### With curl

```bash
curl -w "DNS: %{time_namelookup} | TCP: %{time_connect} | TLS: %{time_appconnect} | TTFB: %{time_starttransfer}\n" \
  -o /dev/null -s https://yourapp.com

# Run multiple times to get a stable average
for i in {1..5}; do
  curl -w "TTFB: %{time_starttransfer}\n" -o /dev/null -s https://yourapp.com
done
```

### With Node.js (Fetch API)

```javascript
async function measureTTFB(url) {
  const start = performance.now();

  const response = await fetch(url);
  const reader = response.body.getReader();

  // Read just the first chunk (first byte received)
  await reader.read();
  reader.cancel();

  const ttfb = performance.now() - start;
  console.log(`TTFB: ${ttfb.toFixed(0)}ms`);
}

measureTTFB('https://yourapp.com');
```

### With Python

```python
import requests
import time

def measure_ttfb(url):
    start = time.time()
    with requests.get(url, stream=True) as r:
        # Iterate until we get the first bytes
        for chunk in r.iter_content(1):
            ttfb = (time.time() - start) * 1000
            print(f"TTFB: {ttfb:.0f}ms")
            break

measure_ttfb('https://yourapp.com')
```

## TTFB Benchmarks

| TTFB | Rating |
|------|--------|
| < 100ms | Excellent |
| 100–200ms | Good |
| 200–500ms | Needs improvement |
| > 500ms | Poor |

Google flags TTFB above 600ms as a problem in PageSpeed Insights.

## The Four Biggest TTFB Drivers

### 1. Geographic Distance (Physical Latency)

Light in fiber travels at roughly two-thirds of its speed in vacuum, so every 1,000 km of distance adds about 10ms of round-trip latency before any routing overhead. A server in London serving users in Sydney (roughly 17,000 km away) adds 250ms+ before a single byte is processed.

**Diagnosis:**

```bash
# Measure TTFB from multiple geographic locations
# webpagetest.org allows choosing test location
# Or use a VPN to simulate different regions

curl -w "TTFB: %{time_starttransfer}\n" -o /dev/null -s https://yourapp.com
```

**Fix:** Deploy to a region close to your users. For global audiences, consider deploying multiple instances or using a CDN for static content.
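The physics sets a hard floor you cannot optimize away. A back-of-envelope sketch of that floor, assuming a direct great-circle fiber path at roughly 200,000 km/s (real routes detour, so actual RTT is higher):

```javascript
// Estimate the minimum possible round-trip time between two coordinates,
// ignoring routing detours and processing delays.
function minRttMs(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const R = 6371; // Earth radius, km

  // Haversine great-circle distance
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  const distanceKm = 2 * R * Math.asin(Math.sqrt(a));

  // Light in fiber: ~200,000 km/s; double for the round trip
  return ((2 * distanceKm) / 200000) * 1000;
}

// London → Sydney: roughly 170ms of unavoidable round-trip latency
console.log(minRttMs(51.5074, -0.1278, -33.8688, 151.2093).toFixed(0));
```

If that floor alone exceeds your TTFB budget, no amount of server tuning helps — only moving the server (or a cache) closer to the user does.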

### 2. Cold Starts

On platforms that spin down idle apps to save resources, the first request after idle time triggers a "cold start" — reloading the app, re-initializing database connections, re-importing modules. Cold starts commonly add 2–10 seconds to TTFB.

**Diagnosis:** Time your first request after a period of inactivity vs. subsequent requests.
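That comparison can be scripted. A sketch, where `measure` is any async function returning one TTFB sample in milliseconds (for example, a wrapper around the measureTTFB function earlier in this article):

```javascript
// Compare the first request after idle against subsequent warm requests.
// A delta of several seconds is the cold-start signature.
async function coldStartDelta(measure, samples = 5) {
  const first = await measure(); // likely a cold start after idle
  const warm = [];
  for (let i = 1; i < samples; i++) {
    warm.push(await measure());
  }
  const warmAvg = warm.reduce((a, b) => a + b, 0) / warm.length;
  return { first, warmAvg, delta: first - warmAvg };
}
```

Run it once after your app has been idle past the platform's spin-down window; if `delta` is large while `warmAvg` is fine, cold starts are your problem, not your code.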

**Fix:** Use a platform like ApexWeave that keeps your app process running continuously.

### 3. Slow Database Queries

For dynamic pages that query a database before responding, query time is directly added to TTFB. A query that takes 400ms means TTFB is at minimum 400ms.

**Diagnosis:**

```javascript
// Add query timing to every database call
const queryWithTiming = async (sql, params) =>
const start = Date.now();
const result = await pool.query(sql, params);
const ms = Date.now() - start;

if (ms > 50)
console.warn(`Slow query ($msms): $sql.slice(0, 100)`);


return result;
;
```

**Fix:**
- Add indexes on columns used in WHERE and JOIN clauses
- Use `EXPLAIN ANALYZE` to inspect query plans
- Cache query results in Redis

### 4. Missing Response Caching

If your server generates the same response repeatedly (same page for many users), caching the response avoids recomputation on every request.

**In-memory cache (simple):**

```javascript
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // 5 minute TTL

app.get('/api/homepage-data', async (req, res) => {
  const cacheKey = 'homepage-data';
  const cached = cache.get(cacheKey);

  if (cached) {
    res.set('X-Cache', 'HIT');
    return res.json(cached);
  }

  const data = await buildHomepageData(); // Slow operation
  cache.set(cacheKey, data);

  res.set('X-Cache', 'MISS');
  res.json(data);
});
```

**Redis cache (for multi-instance apps):**

```javascript
const redis = require('./redis');

app.get('/api/homepage-data', async (req, res) => {
  const cached = await redis.get('homepage-data');
  if (cached) return res.json(JSON.parse(cached));

  const data = await buildHomepageData();
  await redis.setex('homepage-data', 300, JSON.stringify(data));
  res.json(data);
});
```

**HTTP cache headers (for CDNs and browsers):**

```javascript
app.get('/api/products', async (req, res) => {
  const products = await getProducts();

  res.set({
    'Cache-Control': 'public, max-age=60, s-maxage=300',
    'ETag': generateETag(products),
    'Last-Modified': new Date().toUTCString(),
  });

  res.json(products);
});
```

## TTFB Optimization for Different Frameworks

### Express.js

```javascript
// Enable compression to reduce response size (reduces transmission time)
const compression = require('compression');
app.use(compression());

// Disable x-powered-by header (micro-optimization)
app.disable('x-powered-by');

// Keep-alive connections reduce TLS overhead on subsequent requests
const server = app.listen(PORT);
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;
```

### Next.js

```javascript
// next.config.js — enable compression
module.exports = {
  compress: true,

  // Static page generation where possible
  // ISR for dynamic pages with caching
};
```

For SSR pages, use `unstable_cache` to cache data fetching:

```javascript
import { unstable_cache } from 'next/cache';

const getCachedProducts = unstable_cache(
  async () => fetchProductsFromDB(),
  ['products'],
  { revalidate: 300 }  // Revalidate every 5 minutes
);
```

### Django

```python
# Use Django's per-view caching
from django.views.decorators.cache import cache_page

@cache_page(60 * 5)  # Cache for 5 minutes
def product_list(request):
    products = Product.objects.filter(active=True)
    return JsonResponse({'products': list(products.values())})
```

## Tracking TTFB Progress

Set up ongoing TTFB monitoring:

```javascript
// Log server-side response time so slow endpoints surface in your logs
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const ttfb = Date.now() - start;
    if (ttfb > 500) {
      console.warn(`High TTFB: ${ttfb}ms for ${req.method} ${req.path}`);
    }
  });
  next();
});
```



Tools for ongoing monitoring:
- [Checkly](https://checklyhq.com) — synthetic monitoring with TTFB alerts
- [Better Uptime](https://betteruptime.com) — uptime + response time tracking
- [New Relic / Datadog](https://newrelic.com) — full APM with TTFB tracking

With good hosting infrastructure and response caching, TTFB under 100ms is achievable for most applications. Start with the free 7-day trial at [apexweave.io](https://apexweave.io) — managed hosting with no cold starts and fast regional servers.