Google Cloud Endpoints makes CORS feel simpler than it really is. That’s both the nice part and the dangerous part.
If you’re running Endpoints with ESP or ESPv2, you’ve got a few ways to handle CORS:
- let Endpoints proxy and pass CORS through from your backend
- make Endpoints handle CORS preflight for you
- split responsibility between proxy and backend
All three work. Not all three age well.
I’ve seen teams “fix CORS” by slapping Access-Control-Allow-Origin: * onto everything, then later wonder why authenticated browser requests still fail. CORS is one of those areas where the browser is very literal, and Google Cloud Endpoints doesn’t save you from bad policy choices.
The short version
If you want my opinion:
- Best for most teams: handle CORS explicitly in your backend and let Endpoints pass headers through
- Best for simple public APIs: use Endpoints CORS support if you want quick preflight handling
- Worst long-term choice: split logic across proxy and app unless you have a very clear reason
What Google Cloud Endpoints is actually doing
Endpoints sits in front of your service, usually with ESPv2, and validates/authenticates/routes requests before they hit your backend.
For browser requests, CORS usually means handling:
- `Origin`
- `Access-Control-Request-Method`
- `Access-Control-Request-Headers`
And responding with:
- `Access-Control-Allow-Origin`
- `Access-Control-Allow-Methods`
- `Access-Control-Allow-Headers`
- `Access-Control-Allow-Credentials`
- `Access-Control-Max-Age`
- sometimes `Access-Control-Expose-Headers`
The browser sends a preflight OPTIONS request when the real request is “non-simple.” That includes things like:
- an `Authorization` header
- `Content-Type: application/json` in many cases
- custom headers
- methods like `PUT`, `PATCH`, or `DELETE`
If preflight fails, the real request never happens.
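The "simple request" rules above can be sketched as a small predicate. This is an illustrative approximation of the Fetch spec's logic, not code you'd ship — the browser decides this for you; the function and constant names here are my own:

```javascript
// Approximation of the browser's "simple request" test. A request that fails
// it triggers a preflight OPTIONS before the real call.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_HEADERS = new Set(['accept', 'accept-language', 'content-language', 'content-type']);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain'
]);

function needsPreflight(method, headers = {}) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase();
    if (!SIMPLE_HEADERS.has(key)) return true; // e.g. Authorization, X-Anything
    // Even Content-Type only stays "simple" for three media types.
    if (key === 'content-type' && !SIMPLE_CONTENT_TYPES.has(value.split(';')[0].trim())) {
      return true;
    }
  }
  return false;
}
```

Note the consequence: a plain `GET` with no custom headers skips preflight, but the moment your SPA adds a bearer token or posts JSON, every request pays the preflight tax.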
Option 1: Handle CORS in your backend
This is the one I trust most.
Your app returns the CORS headers, and Endpoints just forwards them.
Pros
- Single source of truth
- Easy to test locally
- Works well when CORS depends on route, tenant, or auth state
- Less proxy magic
Cons
- Your backend must correctly handle `OPTIONS` requests
- Different services may implement CORS differently if you don’t standardize it
- You need to be careful not to emit conflicting headers at the proxy layer
Example: Express backend behind Endpoints
```javascript
const express = require('express');
const app = express();

// Origins allowed to make credentialed cross-origin requests.
const allowedOrigins = new Set([
  'https://app.example.com',
  'https://admin.example.com'
]);

app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.has(origin)) {
    // Echo the specific origin — never '*' when credentials are allowed.
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Vary', 'Origin');
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    res.setHeader('Access-Control-Expose-Headers', 'ETag, Link, Location, Retry-After');
  }
  // Answer preflight here so it never reaches auth or route handlers.
  if (req.method === 'OPTIONS') {
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
    res.setHeader(
      'Access-Control-Allow-Headers',
      'Authorization, Content-Type, X-Requested-With'
    );
    res.setHeader('Access-Control-Max-Age', '3600');
    return res.status(204).end();
  }
  next();
});

app.get('/profile', (req, res) => {
  res.json({ ok: true });
});

app.listen(8080);
```
That Vary: Origin header matters. Without it, caches can serve the wrong origin’s response to someone else.
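To make the failure mode concrete, here is a toy in-memory "cache" (purely illustrative, not a real cache) that keys responses by URL alone and ignores `Vary` — exactly the mistake a misconfigured shared cache makes:

```javascript
// Toy shared cache keyed only on URL — the bug Vary: Origin exists to prevent.
const cache = new Map();

function fetchThroughCache(url, origin, backend) {
  if (!cache.has(url)) {
    cache.set(url, backend(origin)); // first caller's origin gets baked in
  }
  return cache.get(url); // later callers get it, whatever their origin
}

// Stand-in backend that echoes the requesting origin, like the app above.
const backend = (origin) => ({ 'access-control-allow-origin': origin });

const first = fetchThroughCache('/profile', 'https://app.example.com', backend);
const second = fetchThroughCache('/profile', 'https://admin.example.com', backend);
// `second` still carries app.example.com's Allow-Origin header, so the
// browser on admin.example.com blocks the response it was entitled to.
```

`Vary: Origin` tells real caches to add the request's `Origin` to the cache key, which is what this toy version fails to do.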
When this approach is best
Use backend handling when:
- your API is authenticated
- you need cookies or credentials
- you have multiple frontend origins
- you want precise control over exposed headers
A lot of real APIs expose more than the basics. GitHub, for example, returns:
```
access-control-allow-origin: *
access-control-expose-headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
```
That’s a good reminder that browser clients often need access to operational headers, not just the JSON body.
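The browser-side effect is easy to model: cross-origin JavaScript can only read a response header if it is CORS-safelisted or named in `Access-Control-Expose-Headers`. This helper (my own sketch, mirroring that rule) shows which headers survive:

```javascript
// Returns only the response headers that cross-origin JS would be able to
// read: the CORS-safelisted set, plus anything the server exposes.
function readableHeaders(rawHeaders) {
  // Header names are case-insensitive; normalize before lookups.
  const headers = {};
  for (const [name, value] of Object.entries(rawHeaders)) {
    headers[name.toLowerCase()] = value;
  }
  const exposed = (headers['access-control-expose-headers'] || '')
    .split(',')
    .map((h) => h.trim().toLowerCase())
    .filter(Boolean);
  const safelisted = new Set([
    'cache-control', 'content-language', 'content-length',
    'content-type', 'expires', 'last-modified', 'pragma'
  ]);
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    if (safelisted.has(name) || exposed.includes(name)) out[name] = value;
  }
  return out;
}
```

If your rate-limit header isn't in the exposed list, `response.headers.get(...)` in the browser quietly returns `null` — no error, just missing data.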
Option 2: Let Google Cloud Endpoints handle CORS
Endpoints can answer CORS preflight requests directly, which can reduce load on your backend and simplify setup.
This is typically enabled in your OpenAPI config with CORS support and ESPv2 runtime flags, depending on your deployment style.
Pros
- Fast to enable
- Preflight can be answered at the proxy
- Less application code
- Nice for public or low-complexity APIs
Cons
- Less flexible
- Easy to outgrow
- Can become confusing when backend also sets CORS
- Not great when policy differs by route or tenant
OpenAPI example
```yaml
swagger: "2.0"
info:
  title: example-api
  version: 1.0.0
host: my-api.endpoints.my-project.cloud.goog
schemes:
  - https
produces:
  - application/json
paths:
  /hello:
    get:
      operationId: hello
      responses:
        200:
          description: OK
          schema:
            type: string
x-google-endpoints:
  - name: my-api.endpoints.my-project.cloud.goog
    allowCors: true
```
That allowCors: true tells Endpoints not to reject CORS-related traffic and allows the proxy to participate in handling it.
You’ll still want to verify exactly which headers are returned in your environment. I don’t trust CORS until I’ve tested it with a real browser and a raw preflight request.
Example preflight test
```shell
curl -i -X OPTIONS "https://my-api.endpoints.my-project.cloud.goog/hello" \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: GET" \
  -H "Access-Control-Request-Headers: Authorization, Content-Type"
```
If the response headers don’t match what your frontend actually sends, the browser will block the call no matter how confident your infrastructure config looks.
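One specific trap: the preflight's `Access-Control-Allow-Headers` must cover every header the frontend will actually send. A tiny check like this (an assumed helper of mine, not anything Endpoints provides) makes the comparison mechanical:

```javascript
// Given the headers the frontend will send and the preflight's
// Access-Control-Allow-Headers value, list the ones the browser would block.
function missingAllowedHeaders(requested, allowHeadersValue) {
  const allowed = new Set(
    (allowHeadersValue || '')
      .split(',')
      .map((h) => h.trim().toLowerCase()) // header names compare case-insensitively
      .filter(Boolean)
  );
  return requested
    .map((h) => h.toLowerCase())
    .filter((h) => !allowed.has(h));
}
```

If the result is non-empty — say your app started sending an `X-Tenant-Id` header nobody added to the proxy config — the real request never leaves the browser.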
When this approach is best
Use Endpoints-managed CORS when:
- your API is mostly public
- your allowed origins are simple
- you don’t need credentialed browser requests
- you want to offload preflight traffic
If your browser app sends bearer tokens in the Authorization header, this can still work, but the policy usually becomes less “simple” very quickly.
Option 3: Split CORS between Endpoints and backend
This is the “it works on staging” setup.
The proxy handles preflight, but your backend adds Access-Control-Allow-Origin or Access-Control-Expose-Headers on actual responses.
Sometimes teams end up here by accident. One engineer enables allowCors, another adds middleware in the app, and now nobody is sure which layer owns policy.
Pros
- Can be useful when you need proxy-side preflight optimization
- Lets backend control response-specific headers like `Access-Control-Expose-Headers`
- May help during migration
Cons
- Harder to reason about
- Easy to create inconsistent policy
- Debugging gets annoying fast
- Duplicate headers are common
- Preflight and actual response can disagree
That last one is brutal. The browser may approve the preflight, then reject the actual response because the final Access-Control-Allow-Origin is missing or wrong.
Example of a bad split
- Endpoints preflight says:
  `Access-Control-Allow-Origin: https://app.example.com`
  `Access-Control-Allow-Headers: Authorization`
- Backend actual response says:
  `Access-Control-Allow-Origin: *`
That fails if credentials are involved, and even when it doesn’t fail, it’s sloppy policy.
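A mismatch like that is easy to catch mechanically if you capture both responses. Here's a sketch of the comparison (function and header-map shapes are my own, assuming lowercased header names):

```javascript
// Compare the preflight response's CORS policy with the actual response's.
// Header names are assumed pre-lowercased, as Node's http module delivers them.
function corsMismatch(preflight, actual) {
  const problems = [];
  const pOrigin = preflight['access-control-allow-origin'];
  const aOrigin = actual['access-control-allow-origin'];
  if (pOrigin !== aOrigin) {
    problems.push(`origin mismatch: preflight=${pOrigin} actual=${aOrigin}`);
  }
  if (aOrigin === '*' && actual['access-control-allow-credentials'] === 'true') {
    problems.push('wildcard origin with credentials is rejected by browsers');
  }
  return problems;
}
```

Running this in an integration test after any proxy or middleware change is cheaper than debugging it from a user's browser console.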
Pros and cons side by side
Backend-managed CORS
Pros
- Full control
- Easier route-specific policy
- Better for auth and credentials
- Cleaner long-term ownership
Cons
- More app code
- Every service must implement it correctly
- You handle preflight load yourself
Endpoints-managed CORS
Pros
- Quick setup
- Proxy can absorb preflight traffic
- Good for simple public APIs
Cons
- Less granular
- Harder to express complex origin logic
- Can hide problems until frontend integration
Mixed model
Pros
- Flexible in theory
- Sometimes useful during migration
Cons
- Operationally messy
- Easy to break
- Usually not worth it
My recommendation
For internal dashboards, SPAs, and authenticated browser apps, I’d keep CORS in the backend and make Endpoints stay out of the way as much as possible.
For public APIs with predictable usage, Endpoints-managed CORS is fine if you verify the exact behavior in production.
I would avoid the mixed model unless I had a strong requirement like:
- centralized preflight handling for performance
- legacy services with inconsistent app frameworks
- a temporary migration plan with a clear end date
If there’s no clear owner for CORS, bugs stick around forever.
Common mistakes with Google Cloud Endpoints CORS
Using * with credentials
This is still the classic mistake.
If you return:
```
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
```
the browser will reject it. You must echo a specific allowed origin when credentials are enabled.
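The safe pattern fits in a few lines. This is a sketch of the decision, with an illustrative allowlist:

```javascript
// Illustrative allowlist — replace with your real origins.
const allowedOrigins = new Set(['https://app.example.com']);

// Decide what Access-Control-Allow-Origin to send.
// Returning null means: omit the header entirely (the browser then blocks).
function allowOriginFor(requestOrigin, withCredentials) {
  if (!withCredentials) return '*'; // fine for public, credential-free APIs
  // With credentials, echo a vetted specific origin — never '*'.
  return allowedOrigins.has(requestOrigin) ? requestOrigin : null;
}
```

Because the value now varies per request, this is also exactly the case where `Vary: Origin` becomes mandatory.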
Forgetting Vary: Origin
If your response changes per origin, caches need to know that.
Vary: Origin
Without it, shared caches can leak the wrong CORS response.
Ignoring Access-Control-Expose-Headers
Your frontend can’t read arbitrary response headers unless you expose them.
If your JS needs rate limit or pagination metadata, expose it explicitly:
Access-Control-Expose-Headers: ETag, Link, Location, Retry-After, X-RateLimit-Limit, X-RateLimit-Remaining
Treating preflight as optional
If the browser sends preflight, you must answer it correctly. Redirecting OPTIONS, authenticating it incorrectly, or returning a generic 404 will break the real request.
A practical rule
Pick one layer to own CORS.
That’s the real comparison here. Google Cloud Endpoints gives you multiple ways to get a green checkmark, but only one way to keep the setup understandable six months later: one owner, one policy, tested with real browser requests.
If you’re already tightening API security, CORS should sit alongside the rest of your response-header review. If you’re also working on headers like CSP or related browser controls for web apps, see the official docs for your stack, and for broader header strategy there’s also https://csp-guide.com.
For Google Cloud Endpoints docs, stick with the official documentation and test every change with a real OPTIONS request before you call it done.