Lambdas / Netlify
⭐ Recommended: Use the Reforge CLI to generate TypeScript definitions for type-safe access to your flags and configs:
```bash
npx @reforge-com/cli generate --targets node-ts
```
Choosing an Approach
The first step is to choose between a client-side or server-side approach. We've written a blog post that goes into detail about choosing how to use Reforge with Netlify.
Feature Flags in Lambdas: The Browser-Like Approach
A practical solution is to treat Netlify functions much like a browser: Reforge's JavaScript client caches flag evaluations per user in a CDN. Here's a sample code snippet for this approach:
- TypeScript (Recommended)
- JavaScript
```typescript
import { reforge, Context } from "@reforge-com/javascript";

export default async (req: Request, context: any) => {
  const clientOptions = {
    sdkKey: process.env.REFORGE_FRONTEND_SDK_KEY!, // client SDK key
    context: new Context({ user: { key: "1234" } }), // user context
  };

  await reforge.init(clientOptions); // initialize with context

  if (reforge.isEnabled("my-flag")) { // check feature flag
    // Your code here
  }

  return new Response("ok");
};
```
```javascript
import { reforge, Context } from "@reforge-com/javascript";

export default async (req, context) => {
  const clientOptions = {
    sdkKey: process.env.REFORGE_FRONTEND_SDK_KEY,
    context: new Context({ user: { key: "1234" } }),
  };

  await reforge.init(clientOptions);

  if (reforge.isEnabled("my-flag")) {
    // Your code here
  }

  return new Response("ok");
};
```
In our testing from a Netlify function, we see around 50ms of latency on the initial request and around 10ms for each subsequent request with the same context. That may be too slow for some applications, but it's a good starting point and very easy to set up.
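If you want to verify these latency numbers in your own deployment, a small timing wrapper is enough. Here's a hedged sketch (`timed` is an illustrative helper, not part of the Reforge SDK):

```typescript
// Wrap any async call (e.g. reforge.init) and log how long it took.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now(); // available in Node 16+ and browsers
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${(performance.now() - start).toFixed(1)}ms`);
  }
}

// Usage inside the handler above (assumes clientOptions from the snippet):
// await timed("reforge.init", () => reforge.init(clientOptions));
```

Logging the cold-start and warm-path timings separately makes it easy to decide whether this approach is fast enough for your application.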
The nice thing about this solution is that you get instant updates when you change a flag: the next request will have up-to-date data.
The Server-Side Alternative
Alternatively, you can implement a server-side strategy using the Reforge Node.js client. The key is to configure the client to disable background updates and background telemetry, then perform updates on our own timeline.
Here's a sample code snippet for this approach:
- ⭐ TypeScript + Generated Types (Recommended)
- TypeScript
- JavaScript
First, generate your types:
```bash
npx @reforge-com/cli generate --targets node-ts
```
Then set up your Lambda with full type safety:
```typescript
import { Reforge, type Contexts } from "@reforge-com/node";
import { ReforgeTypesafeNode } from "./generated/reforge-server";

const baseReforge = new Reforge({
  sdkKey: process.env.REFORGE_BACKEND_SDK_KEY!,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await baseReforge.init();

// Create typed instance
const reforge = new ReforgeTypesafeNode(baseReforge);

export default async (req: Request, context: any) => {
  const { userId } = context.params;
  const reforgeContext: Contexts = { user: { key: userId } };

  // Use type-safe methods with context
  if (reforge.myFlag(reforgeContext)) {
    // Your code here with full type safety
  }

  const userConfig = reforge.userSpecificConfig(reforgeContext);

  // every 60 seconds, check for updates in-process
  baseReforge.updateIfStalerThan(60 * 1000);

  return new Response("ok");
};

export const config = { path: "/users/:userId" };
```
```typescript
import { Reforge, type Contexts } from "@reforge-com/node";

const reforge = new Reforge({
  sdkKey: process.env.REFORGE_BACKEND_SDK_KEY!, // server SDK key
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await reforge.init(); // load configuration

export default async (req: Request, context: any) => {
  const { userId } = context.params; // extract user ID from URL
  const reforgeContext: Contexts = { user: { key: userId } }; // create user context

  return reforge.inContext(reforgeContext, (rf) => {
    if (rf.isFeatureEnabled("my-flag")) { // context-aware feature flag
      // Your code here
    }

    // every 60 seconds, check for updates in-process
    reforge.updateIfStalerThan(60 * 1000); // conditional update

    return new Response("ok");
  });
};

export const config = { path: "/users/:userId" }; // URL pattern
```
```javascript
import { Reforge } from "@reforge-com/node";

const reforge = new Reforge({
  sdkKey: process.env.REFORGE_BACKEND_SDK_KEY,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await reforge.init();

export default async (req, context) => {
  const { userId } = context.params;
  const reforgeContext = { user: { key: userId } };

  return reforge.inContext(reforgeContext, (rf) => {
    if (rf.isFeatureEnabled("my-flag")) {
      // Your code here
    }

    // every 60 seconds, check for updates in-process
    reforge.updateIfStalerThan(60 * 1000);

    return new Response("ok");
  });
};

export const config = { path: "/users/:userId" };
```
With this approach, most requests will be fast, but the periodic update will take a bit longer: about 50ms in our testing from a Netlify function. We're entirely in control of the frequency here, so it's a judgment call how real-time you want your feature flag updates to be. You could even disable updates altogether if tail latency is of utmost concern and you don't mind redeploying to update your flags.
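The conditional update used above follows a simple staleness-guard pattern: refresh at most once per interval, piggybacked on incoming requests. Here's a hedged sketch of the idea in plain TypeScript (`makeStalenessGuard` is illustrative, not a Reforge API):

```typescript
// Refresh at most once per maxAgeMs; call the guard on every request.
function makeStalenessGuard(refresh: () => void) {
  let lastUpdateAt = 0; // epoch ms of the last refresh (0 = never)
  return (maxAgeMs: number): boolean => {
    const now = Date.now();
    if (now - lastUpdateAt < maxAgeMs) return false; // still fresh, skip
    refresh(); // pull the latest flag data
    lastUpdateAt = now;
    return true; // a refresh happened on this request
  };
}
```

Passing `Infinity` as the interval is effectively the "never update, redeploy instead" end of the trade-off, since the data loaded on cold start is then used for the life of the function instance.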