Selfie Module
The Selfie module captures a user's face with ML-powered liveness detection to prevent spoofing.
This guide is specific to Web SDK 2.0. If you are still using 1.x, you can find documentation here. We strongly recommend upgrading - contact your Incode Representative for upgrade information.
This module follows the shared camera-capture pattern. See that page for the shared manager lifecycle, capture sub-states, and skeleton; the rest of this page covers Selfie-specific config, detection statuses, and methods.
Tag
<incode-selfie> is a standard Web Component. Importing the UI subpath registers the custom element; importing the CSS applies the module's styles.
import '@incodetech/web/selfie';
import '@incodetech/web/selfie/styles.css';
Properties
Set these as JavaScript properties on the element (not as HTML attributes):
| Property | Type | Required | Description |
|---|---|---|---|
config | SelfieConfig | ❌ | Configuration options (validation flags, modes) |
onFinish | () => void | ❌ | Called when capture completes successfully |
onError | (error: string) => void | ❌ | Called when an error occurs |
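Because `config` is an object and the callbacks are functions, they must be assigned as properties rather than written as HTML attributes (attributes can only carry strings). A minimal sketch of doing that in one place; the `wireSelfie` helper and its handlers shape are our own, not part of the SDK:

```typescript
// Hypothetical helper (not part of the SDK): assigns the element's
// properties in one place. Plain assignment is the point here, since
// attributes can only carry strings.
type SelfieHandlers = {
  onFinish: () => void;
  onError: (error: string) => void;
};

function wireSelfie<T extends object>(
  el: T,
  config: Record<string, unknown>,
  handlers: SelfieHandlers,
): T {
  return Object.assign(el, { config, ...handlers });
}
```

Usage would look like `wireSelfie(document.querySelector('incode-selfie')!, { showTutorial: true }, { onFinish, onError })`.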
WASM Requirements
The Selfie module uses WebAssembly for face detection and liveness analysis. Pre-warm WASM during setup() so models are ready before the user reaches the camera step:
await setup({
apiURL: 'https://demo-api.incodesmile.com',
token: 'your-session-token',
wasm: { pipelines: ['selfie'] },
});
See WASM Configuration for self-hosted paths and the lower-level warmupWasm() API.
Usage
Vanilla HTML / TypeScript
<incode-selfie></incode-selfie>
<script type="module">
import { setup } from '@incodetech/core';
import '@incodetech/web/selfie';
import '@incodetech/web/selfie/styles.css';
await setup({
apiURL: 'https://demo-api.incodesmile.com',
token: 'your-session-token',
wasm: { pipelines: ['selfie'] },
});
const selfie = document.querySelector('incode-selfie');
selfie.onFinish = () => console.log('Selfie captured!');
selfie.onError = (err) => console.error('Selfie error:', err);
</script>
React
React 18 or earlier: add the one-time JSX augmentation from Framework Integration → TypeScript: JSX support for
incode-*tags. React 19+ doesn't need it, and can also use the simpler form from Framework Integration → React 19+ shortcut.
import { useEffect, useRef } from 'react';
import { setup } from '@incodetech/core';
import type { SelfieConfig } from '@incodetech/core/selfie';
import '@incodetech/web/selfie';
import '@incodetech/web/selfie/styles.css';
type SelfieElement = HTMLElement & {
config?: SelfieConfig;
onFinish: () => void;
onError: (error: string) => void;
};
await setup({
apiURL: 'https://demo-api.incodesmile.com',
token: 'your-session-token',
wasm: { pipelines: ['selfie'] },
});
export function SelfieCapture() {
const ref = useRef<SelfieElement>(null);
useEffect(() => {
const el = ref.current;
if (!el) return;
el.onFinish = () => console.log('Selfie captured!');
el.onError = (err) => console.error('Selfie error:', err);
}, []);
return <incode-selfie ref={ref} />;
}
For Angular (CUSTOM_ELEMENTS_SCHEMA) and Vue (compilerOptions.isCustomElement) setup, see Framework Integration.
Headless Mode
For complete UI control, use createSelfieManager from @incodetech/core/selfie.
Quick Start
import { setup } from '@incodetech/core';
import { createSelfieManager } from '@incodetech/core/selfie';
import { warmupWasm } from '@incodetech/core/wasm';
await setup({
apiURL: 'https://demo-api.incodesmile.com',
token: 'your-session-token',
});
await warmupWasm({
wasmPath: '/wasm/webLib.wasm',
glueCodePath: '/wasm/webLib.js',
modelsBasePath: '/wasm/models',
pipelines: ['selfie'],
});
const manager = createSelfieManager({
config: {
showTutorial: true,
autoCaptureTimeout: 10,
validateLenses: true,
validateFaceMask: true,
},
});
manager.subscribe((state) => {
console.log('Status:', state.status);
if (state.status === 'capture') {
console.log('Detection:', state.detectionStatus);
console.log('Stream ready:', !!state.stream);
}
if (state.status === 'finished') {
console.log('Selfie captured!', state.processResponse);
manager.stop();
}
});
manager.load();
State Machine Flow
flowchart LR
idle -->|load| tutorial
tutorial -->|nextStep| permissions
permissions -->|granted| capture
capture -->|upload done| processing
processing -->|success| finished
States Reference
| Status | Description | Key Properties |
|---|---|---|
idle | Initial state, waiting for load() | – |
loading | Checking permissions (when no tutorial) | – |
tutorial | Showing tutorial | ageAssurance (boolean; mirrors config.ageAssurance and selects the age-assurance copy variant) |
permissions | Camera permission handling | permissionStatus |
capture | Camera active, detecting face | stream, captureStatus, detectionStatus, attemptsRemaining |
processing | Server-side processing of the captured selfie | – |
finished | Capture complete | processResponse? |
closed | User closed the flow | – |
error | Fatal error occurred | error |
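A custom UI typically switches on the top-level status from the table above. An illustrative sketch: the status union matches the States Reference, but the label strings are our own choices, not SDK copy:

```typescript
// Illustrative: map the manager's top-level status to user-facing copy.
type TopLevelStatus =
  | 'idle' | 'loading' | 'tutorial' | 'permissions' | 'capture'
  | 'processing' | 'finished' | 'closed' | 'error';

function statusToLabel(status: TopLevelStatus): string {
  switch (status) {
    case 'tutorial': return 'How to take a good selfie';
    case 'permissions': return 'Camera permission needed';
    case 'capture': return 'Camera active';
    case 'processing': return 'Processing your selfie...';
    case 'finished': return 'All done!';
    case 'closed': return 'Cancelled';
    case 'error': return 'Something went wrong';
    default: return 'Loading...'; // idle / loading
  }
}
```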
Capture State Properties
When status === 'capture':
| Property | Type | Description |
|---|---|---|
stream | CameraStream | Camera stream for <video> element |
captureStatus | string | 'initializing', 'detecting', 'capturing', 'uploading', 'uploadError', 'success' |
detectionStatus | DetectionStatus | Face detection feedback (see below) |
attemptsRemaining | number | Remaining capture attempts |
uploadError | string? | Error message if upload failed |
assistedOnboarding | boolean | Whether assisted onboarding mode is active |
debugFrame | ImageData? | Latest processed frame (for debugging) |
Note: processing is a separate top-level state (status === 'processing'), not a captureStatus value.
Detection Status Values
| Status | User Instruction |
|---|---|
idle | "Preparing camera..." |
detecting | "Detecting face..." |
noFace | "Position your face in the frame" |
tooManyFaces | "Only one face should be visible" |
tooClose | "Move back" |
tooFar | "Move closer" |
blur | "Hold still, image is blurry" |
dark | "Improve lighting conditions" |
faceAngle | "Face your camera directly" |
headWear | "Remove head coverings" |
lenses | "Remove glasses or lenses" |
eyesClosed | "Open your eyes" |
faceMask | "Remove face mask" |
centerFace | "Center your face" |
getReady | "Get ready..." |
getReadyFinished | "Hold still..." |
capturing | "Capturing photo..." |
manualCapture | "Tap to capture" |
offline | "No network connection" |
Permission Status Values
When status === 'permissions':
| permissionStatus | Description |
|---|---|
idle | Ready to request permission |
requesting | Permission dialog shown |
denied | User denied camera access |
learnMore | Showing help screen |
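A permissions screen usually reduces each of these values to a next action. A sketch under the status values in the table above; the method names it returns are the manager methods documented under API Methods, but the mapping itself is our own choice:

```typescript
// Illustrative: pick the manager method a custom permissions screen
// should call for each permissionStatus value.
type PermissionStatus = 'idle' | 'requesting' | 'denied' | 'learnMore';

function nextPermissionAction(status: PermissionStatus): 'requestPermission' | null {
  switch (status) {
    case 'idle': return 'requestPermission'; // or goToLearnMore() to show help first
    case 'learnMore': return 'requestPermission'; // or back() to return
    case 'denied': return null; // no SDK call helps; point the user at browser settings
    case 'requesting': return null; // the browser dialog is up; just wait
  }
}
```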
Handling cancellation (closed)
When the user dismisses the selfie flow — either by calling manager.close() or by clicking the SDK's built-in close button — the manager transitions to closed. closed is a final state: the manager won't re-emit further updates from there, and reset() doesn't apply (it's only valid from finished or error). Custom UIs need to decide what to do next.
The two common patterns:
Inside an orchestrated flow — call flowManager.completeModule() to skip the selfie step and let the orchestrator advance:
selfieManager.subscribe((state) => {
if (state.status === 'closed') {
// Optional: render a brief "Cancelled — continuing…" screen, then advance.
setTimeout(() => flowManager.completeModule(), 800);
}
});
Standalone (no orchestrator) — there's nowhere to advance to, so navigate the user out of the selfie surface entirely:
selfieManager.subscribe((state) => {
if (state.status === 'closed') {
selfieManager.stop(); // tear down resources
navigate('/onboarding/cancelled');
}
});
Common UX choice: render a brief "Cancelled — continuing…" screen for ~800 ms before calling flowManager.completeModule(), so the cancellation isn't jarring.
API Methods
| Method | Description | When to Use |
|---|---|---|
load() | Starts the selfie flow | Always call first |
nextStep() | Advances from tutorial to permissions | When status === 'tutorial' |
requestPermission() | Requests camera access | When permissions.idle or permissions.learnMore |
goToLearnMore() | Shows permission help screen | When permissions.idle |
back() | Goes back from learn more | When permissions.learnMore |
capture() | Manual capture trigger | When detectionStatus === 'manualCapture' |
retryCapture() | Retry after upload error | When captureStatus === 'uploadError' |
close() | Close the flow | Anytime |
reset() | Reset to initial state | After finished or error |
stop() | Cleanup resources | When unmounting |
getState() | Returns current state | Anytime |
subscribe(callback) | Subscribe to state changes | Returns unsubscribe function |
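Because subscribe(callback) returns an unsubscribe function, the event-driven API is easy to adapt to promise-based flows. A sketch that assumes only the subscribe()/load() shape documented above; the wrapper itself is ours, not an SDK export:

```typescript
// Sketch: wrap the subscribe-based manager API in a one-shot promise that
// settles when the flow reaches a terminal state.
type ManagerLike<S extends { status: string }> = {
  subscribe(cb: (state: S) => void): () => void;
  load(): void;
};

function waitForFinish<S extends { status: string }>(manager: ManagerLike<S>): Promise<S> {
  return new Promise((resolve, reject) => {
    const unsubscribe = manager.subscribe((state) => {
      if (state.status === 'finished') { unsubscribe(); resolve(state); }
      else if (state.status === 'error') { unsubscribe(); reject(new Error('Selfie capture failed')); }
    });
    manager.load();
  });
}
```

After `const final = await waitForFinish(manager);`, remember to still call manager.stop() to release the camera.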
React Example
import { useState, useEffect, useRef } from 'react';
import { createSelfieManager, type SelfieState } from '@incodetech/core/selfie';
function CustomSelfie() {
const videoRef = useRef<HTMLVideoElement>(null);
const [manager] = useState(() => createSelfieManager({
config: { showTutorial: true, autoCaptureTimeout: 10, validateLenses: true },
}));
const [state, setState] = useState<SelfieState>({ status: 'idle' });
useEffect(() => {
const unsubscribe = manager.subscribe(setState);
manager.load();
return () => { unsubscribe(); manager.stop(); };
}, [manager]);
useEffect(() => {
if (state.status === 'capture' && state.stream && videoRef.current) {
videoRef.current.srcObject = state.stream;
}
}, [state]);
switch (state.status) {
case 'tutorial':
return (
<div>
<h2>Take a Selfie</h2>
<ul>
<li>Ensure good lighting</li>
<li>Remove glasses and hats</li>
<li>Look directly at the camera</li>
</ul>
<button onClick={() => manager.nextStep()}>Continue</button>
</div>
);
case 'permissions':
if (state.permissionStatus === 'denied') {
return (
<div>
<p>Camera access is required.</p>
<p>Please enable camera in your browser settings.</p>
</div>
);
}
return (
<div>
<p>We need camera access to take your selfie.</p>
<button onClick={() => manager.requestPermission()}>Allow Camera</button>
</div>
);
case 'capture':
return (
<div>
<video ref={videoRef} autoPlay playsInline muted />
<p>{getDetectionMessage(state.detectionStatus)}</p>
{state.captureStatus === 'uploading' && <p>Uploading...</p>}
{state.detectionStatus === 'manualCapture' && (
<button onClick={() => manager.capture()}>Take Photo</button>
)}
{state.captureStatus === 'uploadError' && (
<div>
<p className="error">{state.uploadError}</p>
<button onClick={() => manager.retryCapture()}>Retry</button>
</div>
)}
</div>
);
case 'finished':
return <div>✅ Selfie captured successfully!</div>;
case 'error':
return <div className="error">Error: {state.error}</div>;
default:
return <div>Loading...</div>;
}
}
function getDetectionMessage(status: string): string {
const messages: Record<string, string> = {
idle: 'Preparing camera...',
detecting: 'Detecting face...',
noFace: 'Position your face in the frame',
tooManyFaces: 'Only one face should be visible',
tooFar: 'Move closer',
tooClose: 'Move back',
blur: 'Hold still, image is blurry',
dark: 'Improve lighting conditions',
faceAngle: 'Face your camera directly',
centerFace: 'Center your face',
lenses: 'Remove glasses or lenses',
faceMask: 'Remove face mask',
capturing: 'Capturing...',
manualCapture: 'Ready - tap to capture',
};
return messages[status] || 'Detecting face...';
}
Capture-only flow
createSelfieCaptureOnlyManager exposes the same state machine and API surface as createSelfieManager, but bypasses Incode's /omni/add/face upload. Instead of submitting the captured frame and waiting on server-side processing, the manager invokes a customer-supplied onCapture(response) callback with the raw face image (and, when local video recording is enabled, the assembled video) and reaches finished locally. Use it when you want to capture in the browser but route the bytes through your own pipeline.
The config is SelfieConfig plus a required onCapture callback — enforced at compile time:
import { setup } from '@incodetech/core';
import { initializeSession } from '@incodetech/core/session';
import {
createSelfieCaptureOnlyManager,
type SelfieCaptureOnlyConfig,
type FaceCaptureOnlyResponse,
} from '@incodetech/core/selfie';
await setup({
apiURL: 'https://demo-api.incodesmile.com',
wasm: { pipelines: ['selfie'] },
});
await initializeSession({ token: 'your-session-token' });
const config: SelfieCaptureOnlyConfig = {
showTutorial: true,
showPreview: false,
assistedOnboarding: false,
enableFaceRecording: false,
autoCaptureTimeout: 10,
captureAttempts: 3,
validateLenses: true,
validateFaceMask: true,
validateHeadCover: true,
validateClosedEyes: true,
validateBrightness: true,
deepsightLiveness: 'SINGLE_FRAME',
onCapture: async (response: FaceCaptureOnlyResponse) => {
const { image } = response;
await uploadToMyBackend(image.blob);
// image.videoBase64 will be defined if enableFaceRecording was true
},
};
const manager = createSelfieCaptureOnlyManager({ config });
manager.subscribe((state) => {
if (state.status === 'finished') manager.stop();
});
manager.load();
The FaceCaptureOnlyResponse payload is:
type FaceCaptureOnlyResponse = {
image: FaceCapturedImageData;
};
type FaceCapturedImageData = {
imageBase64: string; // The unprocessed full frame, base64-encoded
blob: Blob; // Same content as Blob
url: string; // Object-URL for direct rendering
metadata: string; // Capture metadata (serialized)
videoBase64: string | undefined; // Local-recording bytes — see note below
};
Sensitive biometric payload. Capture-only delivers unprocessed biometric data directly to your code. The standard selfie flow encrypts the upload payload at the transport layer; capture-only does not — the bytes leave the SDK as plain base64 / Blob and you are responsible for handling and transmitting them securely.
When enableFaceRecording: true and the SDK is using a local-recording provider, videoBase64 is populated with the full assembled video as raw, unencrypted base64. (videoBase64 is undefined when recording is off, when a server-side recording provider handles the capture instead, or when video assembly fails.) Treat it with the same care as any other biometric artifact: transport over TLS, restrict storage, observe your jurisdiction's biometric-data retention rules.
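Since videoBase64 (like imageBase64) arrives as plain base64, you will usually decode it to raw bytes before re-encrypting or uploading. A minimal sketch; the helper name is ours:

```typescript
// Decode a base64 payload (e.g. videoBase64) into raw bytes for upload.
// atob() is available in browsers and in Node 16+; each char code is one byte.
function base64ToBytes(b64: string): Uint8Array {
  const binary = atob(b64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes;
}
```

From there, `new Blob([base64ToBytes(image.videoBase64!)], { type: 'video/mp4' })` gives you an uploadable body (the MIME type here is an assumption; check what your recording provider actually produces).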
createSelfieCaptureOnlyManagerFromActor is also exported for advanced cases where you supply a pre-built XState actor.
Configuration Options
SelfieConfig is FlowModuleConfig['SELFIE'] & BaseFaceCaptureConfig. The FlowModuleConfig half is the dashboard-driven shape and its fields are required from your perspective; BaseFaceCaptureConfig adds a few optional UI overrides.
Orchestrated vs headless: when <incode-selfie> runs inside <incode-flow> (or createOrchestratedFlowManager), the orchestrator passes the full FlowModuleConfig['SELFIE'] from the dashboard automatically — you don't set any of these. The required marks below apply when you instantiate the module yourself via createSelfieManager({ config }) or set <incode-selfie>.config directly.
| Option | Type | Required | Description |
|---|---|---|---|
showTutorial | boolean | ✅ | Show tutorial before capture |
showPreview | boolean | ✅ | Show preview after capture |
assistedOnboarding | boolean | ✅ | Use back camera with no mirror (staff-assisted mode) |
enableFaceRecording | boolean | ✅ | Enable video recording of the capture |
autoCaptureTimeout | number | ✅ | Seconds before auto-capture triggers |
captureAttempts | number | ✅ | Maximum capture attempts |
validateLenses | boolean | ✅ | Reject captures with glasses/lenses |
validateFaceMask | boolean | ✅ | Reject captures with face mask |
validateHeadCover | boolean | ✅ | Reject captures with head coverings |
validateClosedEyes | boolean | ✅ | Reject captures with eyes closed |
validateBrightness | boolean | ✅ | Reject captures with poor lighting |
deepsightLiveness | 'SINGLE_FRAME' | 'MULTIMODAL' | 'VIDEOLIVENESS' | ✅ | Liveness detection mode |
numberOfAttempts | number | ❌ | Legacy alias for captureAttempts. Prefer captureAttempts. |
cameraResolution | { width?: number; height?: number } | ❌ | Preferred camera resolution |
ageAssurance | boolean | ❌ | Show the age-assurance copy variant in the tutorial. Mirrors the flow-level age-assurance flag — when true, the tutorial state surfaces ageAssurance: true so the UI renders the alternate copy. |
onDeviceFaceResultsSubmissionEnabled | boolean | ❌ | Opt-in. When true, face analysis runs entirely on-device via the onDeviceSelfie WASM workflow and only the results are submitted to the server. Video recording is skipped on this path. Requires WASM to be configured at setup(). Leave off to keep the legacy server-side pipeline. |
Troubleshooting
Face Not Detected
- Ensure good lighting (avoid backlighting)
- Remove glasses or hats if possible
- Position face within the outline
- Check WASM is properly initialized
Camera Issues
- Ensure HTTPS (camera requires secure context)
- Check browser permissions
- Try closing other apps using camera
- See Troubleshooting – Camera Issues
Upload Errors
- Check network connectivity
- Verify session token is valid
- Check WASM files are accessible
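For transient network failures, a custom UI can retry automatically before surfacing the error to the user. A hedged sketch using only subscribe() and retryCapture() from the API table above; the retry budget is our own choice, not an SDK feature:

```typescript
// Illustrative: automatically call retryCapture() after an upload failure,
// up to maxRetries times, before letting the error reach the user.
type CaptureState = { status: string; captureStatus?: string };

function autoRetryUploads(
  manager: {
    subscribe(cb: (s: CaptureState) => void): () => void;
    retryCapture(): void;
  },
  maxRetries = 2,
): () => void {
  let retries = 0;
  return manager.subscribe((state) => {
    if (state.status === 'capture' && state.captureStatus === 'uploadError' && retries < maxRetries) {
      retries += 1;
      manager.retryCapture(); // give the upload another chance
    }
  });
}
```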
See Also
- Headless Mode: Complete headless API reference
- WASM Configuration: Setting up WebAssembly
- Individual Modules: Overview of all modules
