Selfie Module

The Selfie module captures a user's face with ML-powered liveness detection to prevent spoofing.

📘

This guide is specific to Web SDK 2.0. If you are still using 1.x, you can find documentation here. We strongly recommend upgrading; contact your Incode Representative for upgrade information.

This module follows the camera-capture pattern. See that page for the shared manager lifecycle, capture sub-states, and skeleton; the rest of this page covers Selfie-specific config, detection statuses, and methods.

Tag

<incode-selfie> is a standard Web Component. Importing the UI subpath registers the custom element; importing the CSS applies the module's styles.

import '@incodetech/web/selfie';
import '@incodetech/web/selfie/styles.css';

Properties

Set these as JavaScript properties on the element (not as HTML attributes):

| Property | Type | Required | Description |
|---|---|---|---|
| config | SelfieConfig | | Configuration options (validation flags, modes) |
| onFinish | () => void | | Called when capture completes successfully |
| onError | (error: string) => void | | Called when an error occurs |

WASM Requirements

The Selfie module uses WebAssembly for face detection and liveness analysis. Pre-warm WASM during setup() so models are ready before the user reaches the camera step:

await setup({
  apiURL: 'https://demo-api.incodesmile.com',
  token: 'your-session-token',
  wasm: { pipelines: ['selfie'] },
});

See WASM Configuration for self-hosted paths and the lower-level warmupWasm() API.

Usage

Vanilla HTML / TypeScript

<incode-selfie></incode-selfie>

<script type="module">
  import { setup } from '@incodetech/core';
  import '@incodetech/web/selfie';
  import '@incodetech/web/selfie/styles.css';

  await setup({
    apiURL: 'https://demo-api.incodesmile.com',
    token: 'your-session-token',
    wasm: { pipelines: ['selfie'] },
  });

  const selfie = document.querySelector('incode-selfie');
  selfie.onFinish = () => console.log('Selfie captured!');
  selfie.onError = (err) => console.error('Selfie error:', err);
</script>

React

React 18 or earlier: add the one-time JSX augmentation from Framework Integration → TypeScript: JSX support for incode-* tags. React 19+ doesn't need it, and can also use the simpler form from Framework Integration → React 19+ shortcut.
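For orientation, that augmentation is roughly the following sketch, assuming standard @types/react (the canonical version lives in Framework Integration):

```typescript
// One-time global augmentation so TypeScript accepts <incode-selfie> in JSX.
// React 18 and earlier only; illustrative sketch, not the canonical version.
import type * as React from 'react';

declare global {
  namespace JSX {
    interface IntrinsicElements {
      'incode-selfie': React.DetailedHTMLProps<
        React.HTMLAttributes<HTMLElement>,
        HTMLElement
      >;
    }
  }
}
```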

import { useEffect, useRef } from 'react';
import { setup } from '@incodetech/core';
import type { SelfieConfig } from '@incodetech/core/selfie';
import '@incodetech/web/selfie';
import '@incodetech/web/selfie/styles.css';

type SelfieElement = HTMLElement & {
  config?: SelfieConfig;
  onFinish: () => void;
  onError: (error: string) => void;
};

await setup({
  apiURL: 'https://demo-api.incodesmile.com',
  token: 'your-session-token',
  wasm: { pipelines: ['selfie'] },
});

export function SelfieCapture() {
  const ref = useRef<SelfieElement>(null);

  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    el.onFinish = () => console.log('Selfie captured!');
    el.onError = (err) => console.error('Selfie error:', err);
  }, []);

  return <incode-selfie ref={ref} />;
}

For Angular (CUSTOM_ELEMENTS_SCHEMA) and Vue (compilerOptions.isCustomElement) setup, see Framework Integration.


Headless Mode

For complete UI control, use createSelfieManager from @incodetech/core/selfie.

Quick Start

import { setup } from '@incodetech/core';
import { createSelfieManager } from '@incodetech/core/selfie';
import { warmupWasm } from '@incodetech/core/wasm';

await setup({
  apiURL: 'https://demo-api.incodesmile.com',
  token: 'your-session-token',
});

await warmupWasm({
  wasmPath: '/wasm/webLib.wasm',
  glueCodePath: '/wasm/webLib.js',
  modelsBasePath: '/wasm/models',
  pipelines: ['selfie'],
});

const manager = createSelfieManager({
  config: {
    showTutorial: true,
    autoCaptureTimeout: 10,
    validateLenses: true,
    validateFaceMask: true,
  },
});

manager.subscribe((state) => {
  console.log('Status:', state.status);

  if (state.status === 'capture') {
    console.log('Detection:', state.detectionStatus);
    console.log('Stream ready:', !!state.stream);
  }

  if (state.status === 'finished') {
    console.log('Selfie captured!', state.processResponse);
    manager.stop();
  }
});

manager.load();

State Machine Flow

flowchart LR
    idle -->|load| tutorial
    tutorial -->|nextStep| permissions
    permissions -->|granted| capture
    capture -->|upload done| processing
    processing -->|success| finished

States Reference

| Status | Description | Key Properties |
|---|---|---|
| idle | Initial state, waiting for load() | |
| loading | Checking permissions (when no tutorial) | |
| tutorial | Showing tutorial | ageAssurance (boolean; mirrors config.ageAssurance and selects the age-assurance copy variant) |
| permissions | Camera permission handling | permissionStatus |
| capture | Camera active, detecting face | stream, captureStatus, detectionStatus, attemptsRemaining |
| processing | Server-side processing of the captured selfie | |
| finished | Capture complete | processResponse? |
| closed | User closed the flow | |
| error | Fatal error occurred | error |

Capture State Properties

When status === 'capture':

| Property | Type | Description |
|---|---|---|
| stream | CameraStream | Camera stream for <video> element |
| captureStatus | string | 'initializing', 'detecting', 'capturing', 'uploading', 'uploadError', 'success' |
| detectionStatus | DetectionStatus | Face detection feedback (see below) |
| attemptsRemaining | number | Remaining capture attempts |
| uploadError | string? | Error message if upload failed |
| assistedOnboarding | boolean | Whether assisted onboarding mode is active |
| debugFrame | ImageData? | Latest processed frame (for debugging) |

Note: processing is a separate top-level state (status === 'processing'), not a captureStatus value.
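Because of this, a custom renderer should narrow on the top-level status before reading capture-only fields. A minimal sketch, using a simplified stand-in for the SDK's real SelfieState union (the types here are illustrative):

```typescript
// Simplified stand-in for the SDK's state union (illustration only).
type UiState =
  | { status: 'processing' }
  | { status: 'capture'; captureStatus: string }
  | { status: 'idle' | 'finished' | 'error' };

// Narrow on the top-level status first; captureStatus only exists on 'capture'.
function statusLabel(state: UiState): string {
  if (state.status === 'processing') return 'Processing...';
  if (state.status === 'capture') return `Capturing (${state.captureStatus})`;
  return state.status;
}
```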

Detection Status Values

| Status | User Instruction |
|---|---|
| idle | "Preparing camera..." |
| detecting | "Detecting face..." |
| noFace | "Position your face in the frame" |
| tooManyFaces | "Only one face should be visible" |
| tooClose | "Move back" |
| tooFar | "Move closer" |
| blur | "Hold still, image is blurry" |
| dark | "Improve lighting conditions" |
| faceAngle | "Face your camera directly" |
| headWear | "Remove head coverings" |
| lenses | "Remove glasses or lenses" |
| eyesClosed | "Open your eyes" |
| faceMask | "Remove face mask" |
| centerFace | "Center your face" |
| getReady | "Get ready..." |
| getReadyFinished | "Hold still..." |
| capturing | "Capturing photo..." |
| manualCapture | "Tap to capture" |
| offline | "No network connection" |

Permission Status Values

When status === 'permissions':

| permissionStatus | Description |
|---|---|
| idle | Ready to request permission |
| requesting | Permission dialog shown |
| denied | User denied camera access |
| learnMore | Showing help screen |
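In a custom permission screen, each sub-state maps onto a manager call (see API Methods below). A sketch of that mapping as a pure helper; the helper itself is illustrative and not part of the SDK:

```typescript
type PermissionStatus = 'idle' | 'requesting' | 'denied' | 'learnMore';

// Which manager method makes sense from each permission sub-state.
// (Real calls: manager.requestPermission(), manager.goToLearnMore(),
// manager.back(). From 'learnMore' either back() or requestPermission()
// is valid; this sketch picks the retry path.)
function nextPermissionAction(status: PermissionStatus): string | null {
  switch (status) {
    case 'idle':
      return 'requestPermission'; // show an "Allow Camera" button
    case 'learnMore':
      return 'requestPermission'; // retry from the help screen
    case 'requesting':
      return null; // browser dialog is open; just wait
    case 'denied':
      return null; // no SDK call recovers; point the user to browser settings
  }
}
```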

Handling cancellation (closed)

When the user dismisses the selfie flow — either by calling manager.close() or by clicking the SDK's built-in close button — the manager transitions to closed. closed is a final state: the manager won't re-emit further updates from there, and reset() doesn't apply (it's only valid from finished or error). Custom UIs need to decide what to do next.

The two common patterns:

Inside an orchestrated flow — call flowManager.completeModule() to skip the selfie step and let the orchestrator advance:

selfieManager.subscribe((state) => {
  if (state.status === 'closed') {
    // Optional: render a brief "Cancelled — continuing…" screen, then advance.
    setTimeout(() => flowManager.completeModule(), 800);
  }
});

Standalone (no orchestrator) — there's nowhere to advance to, so navigate the user out of the selfie surface entirely:

selfieManager.subscribe((state) => {
  if (state.status === 'closed') {
    selfieManager.stop(); // tear down resources
    navigate('/onboarding/cancelled');
  }
});


API Methods

| Method | Description | When to Use |
|---|---|---|
| load() | Starts the selfie flow | Always call first |
| nextStep() | Advances from tutorial to permissions | When tutorial |
| requestPermission() | Requests camera access | When permissions.idle or permissions.learnMore |
| goToLearnMore() | Shows permission help screen | When permissions.idle |
| back() | Goes back from learn more | When permissions.learnMore |
| capture() | Manual capture trigger | When detectionStatus === 'manualCapture' |
| retryCapture() | Retry after upload error | When captureStatus === 'uploadError' |
| close() | Close the flow | Anytime |
| reset() | Reset to initial state | After finished or error |
| stop() | Cleanup resources | When unmounting |
| getState() | Returns current state | Anytime |
| subscribe(callback) | Subscribe to state changes | Returns unsubscribe function |
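Since reset() is only valid from finished or error, a tiny guard (illustrative, not part of the SDK) can keep a custom UI from calling it elsewhere:

```typescript
// reset() is documented as valid only from the 'finished' or 'error' states.
function canReset(status: string): boolean {
  return status === 'finished' || status === 'error';
}

// Example usage: only offer a "Start over" button when reset() is legal.
// if (canReset(manager.getState().status)) manager.reset();
```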

React Example

import { useState, useEffect, useRef } from 'react';
import { createSelfieManager, type SelfieState } from '@incodetech/core/selfie';

function CustomSelfie() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const [manager] = useState(() => createSelfieManager({
    config: { showTutorial: true, autoCaptureTimeout: 10, validateLenses: true },
  }));
  const [state, setState] = useState<SelfieState>({ status: 'idle' });

  useEffect(() => {
    const unsubscribe = manager.subscribe(setState);
    manager.load();
    return () => { unsubscribe(); manager.stop(); };
  }, [manager]);

  useEffect(() => {
    if (state.status === 'capture' && state.stream && videoRef.current) {
      videoRef.current.srcObject = state.stream;
    }
  }, [state]);

  switch (state.status) {
    case 'tutorial':
      return (
        <div>
          <h2>Take a Selfie</h2>
          <ul>
            <li>Ensure good lighting</li>
            <li>Remove glasses and hats</li>
            <li>Look directly at the camera</li>
          </ul>
          <button onClick={() => manager.nextStep()}>Continue</button>
        </div>
      );

    case 'permissions':
      if (state.permissionStatus === 'denied') {
        return (
          <div>
            <p>Camera access is required.</p>
            <p>Please enable camera in your browser settings.</p>
          </div>
        );
      }
      return (
        <div>
          <p>We need camera access to take your selfie.</p>
          <button onClick={() => manager.requestPermission()}>Allow Camera</button>
        </div>
      );

    case 'capture':
      return (
        <div>
          <video ref={videoRef} autoPlay playsInline muted />
          <p>{getDetectionMessage(state.detectionStatus)}</p>
          {state.captureStatus === 'uploading' && <p>Uploading...</p>}
          {state.detectionStatus === 'manualCapture' && (
            <button onClick={() => manager.capture()}>Take Photo</button>
          )}
          {state.captureStatus === 'uploadError' && (
            <div>
              <p className="error">{state.uploadError}</p>
              <button onClick={() => manager.retryCapture()}>Retry</button>
            </div>
          )}
        </div>
      );

    case 'processing':
      return <div>Processing your selfie...</div>;

    case 'finished':
      return <div>✅ Selfie captured successfully!</div>;

    case 'error':
      return <div className="error">Error: {state.error}</div>;

    default:
      return <div>Loading...</div>;
  }
}

function getDetectionMessage(status: string): string {
  const messages: Record<string, string> = {
    idle: 'Preparing camera...',
    detecting: 'Detecting face...',
    noFace: 'Position your face in the frame',
    tooManyFaces: 'Only one face should be visible',
    tooFar: 'Move closer',
    tooClose: 'Move back',
    blur: 'Hold still, image is blurry',
    dark: 'Improve lighting conditions',
    faceAngle: 'Face your camera directly',
    centerFace: 'Center your face',
    lenses: 'Remove glasses or lenses',
    faceMask: 'Remove face mask',
    capturing: 'Capturing...',
    manualCapture: 'Ready - tap to capture',
  };
  return messages[status] || 'Detecting face...';
}

Capture-only flow

createSelfieCaptureOnlyManager exposes the same state machine and API surface as createSelfieManager, but bypasses Incode's /omni/add/face upload. Instead of submitting the captured frame and waiting on server-side processing, the manager invokes a customer-supplied onCapture(response) callback with the raw face image (and, when local video recording is enabled, the assembled video) and reaches finished locally. Use it when you want to capture in the browser but route the bytes through your own pipeline.

The config is SelfieConfig plus a required onCapture callback — enforced at compile time:

import { setup } from '@incodetech/core';
import { initializeSession } from '@incodetech/core/session';
import {
  createSelfieCaptureOnlyManager,
  type SelfieCaptureOnlyConfig,
  type FaceCaptureOnlyResponse,
} from '@incodetech/core/selfie';

await setup({
  apiURL: 'https://demo-api.incodesmile.com',
  wasm: { pipelines: ['selfie'] },
});
await initializeSession({ token: 'your-session-token' });

const config: SelfieCaptureOnlyConfig = {
  showTutorial: true,
  showPreview: false,
  assistedOnboarding: false,
  enableFaceRecording: false,
  autoCaptureTimeout: 10,
  captureAttempts: 3,
  validateLenses: true,
  validateFaceMask: true,
  validateHeadCover: true,
  validateClosedEyes: true,
  validateBrightness: true,
  deepsightLiveness: 'SINGLE_FRAME',
  onCapture: async (response: FaceCaptureOnlyResponse) => {
    const { image } = response;
    await uploadToMyBackend(image.blob);
    // image.videoBase64 will be defined if enableFaceRecording was true
  },
};

const manager = createSelfieCaptureOnlyManager({ config });
manager.subscribe((state) => {
  if (state.status === 'finished') manager.stop();
});
manager.load();

The FaceCaptureOnlyResponse payload is:

type FaceCaptureOnlyResponse = {
  image: FaceCapturedImageData;
};

type FaceCapturedImageData = {
  imageBase64: string;             // The unprocessed full frame, base64-encoded
  blob: Blob;                      // Same content as Blob
  url: string;                     // Object-URL for direct rendering
  metadata: string;                // Capture metadata (serialized)
  videoBase64: string | undefined; // Local-recording bytes — see note below
};
⚠️

Sensitive biometric payload. Capture-only delivers unprocessed biometric data directly to your code. The standard selfie flow encrypts the upload payload at the transport layer; capture-only does not — the bytes leave the SDK as plain base64 / Blob and you are responsible for handling and transmitting them securely.

When enableFaceRecording: true and the SDK is using a local-recording provider, videoBase64 is populated with the full assembled video as raw, unencrypted base64. (videoBase64 is undefined when recording is off, when a server-side recording provider handles the capture instead, or when video assembly fails.) Treat it with the same care as any other biometric artifact: transport over TLS, restrict storage, observe your jurisdiction's biometric-data retention rules.
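If you forward the recording to your own backend, one approach is to decode the base64 into a Blob before sending it over TLS. A sketch: base64ToBlob, uploadRecording, and the endpoint URL are hypothetical, and the actual video container format may differ.

```typescript
// Decode a base64 payload (e.g. videoBase64) into a Blob for upload.
function base64ToBlob(base64: string, mimeType: string): Blob {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mimeType });
}

// Hypothetical secure upload: always HTTPS, never log or cache the payload.
async function uploadRecording(videoBase64: string): Promise<void> {
  const blob = base64ToBlob(videoBase64, 'video/mp4'); // container may differ
  await fetch('https://your-backend.example.com/selfie-video', {
    method: 'POST',
    body: blob,
  });
}
```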

createSelfieCaptureOnlyManagerFromActor is also exported for advanced cases where you supply a pre-built XState actor.


Configuration Options

SelfieConfig is FlowModuleConfig['SELFIE'] & BaseFaceCaptureConfig. The FlowModuleConfig half is the dashboard-driven shape, and its fields are required when you configure the module yourself; BaseFaceCaptureConfig adds a few optional UI overrides.

Orchestrated vs headless: when <incode-selfie> runs inside <incode-flow> (or createOrchestratedFlowManager), the orchestrator passes the full FlowModuleConfig['SELFIE'] from the dashboard automatically — you don't set any of these. The required marks below apply when you instantiate the module yourself via createSelfieManager({ config }) or set <incode-selfie>.config directly.

| Option | Type | Required | Description |
|---|---|---|---|
| showTutorial | boolean | | Show tutorial before capture |
| showPreview | boolean | | Show preview after capture |
| assistedOnboarding | boolean | | Use back camera with no mirror (staff-assisted mode) |
| enableFaceRecording | boolean | | Enable video recording of the capture |
| autoCaptureTimeout | number | | Seconds before auto-capture triggers |
| captureAttempts | number | | Maximum capture attempts |
| validateLenses | boolean | | Reject captures with glasses/lenses |
| validateFaceMask | boolean | | Reject captures with face mask |
| validateHeadCover | boolean | | Reject captures with head coverings |
| validateClosedEyes | boolean | | Reject captures with eyes closed |
| validateBrightness | boolean | | Reject captures with poor lighting |
| deepsightLiveness | 'SINGLE_FRAME' \| 'MULTIMODAL' \| 'VIDEOLIVENESS' | | Liveness detection mode |
| numberOfAttempts | number | | Legacy alias for captureAttempts. Prefer captureAttempts. |
| cameraResolution | { width?: number; height?: number } | | Preferred camera resolution |
| ageAssurance | boolean | | Show the age-assurance copy variant in the tutorial. Mirrors the flow-level age-assurance flag: when true, the tutorial state surfaces ageAssurance: true so the UI renders the alternate copy. |
| onDeviceFaceResultsSubmissionEnabled | boolean | | Opt-in. When true, face analysis runs entirely on-device via the onDeviceSelfie WASM workflow and only the results are submitted to the server. Video recording is skipped on this path. Requires WASM to be configured at setup(). Leave off to keep the legacy server-side pipeline. |
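For reference, a standalone SelfieConfig covering the dashboard-driven fields might look like the following; the values are illustrative and mirror the capture-only example above:

```typescript
import type { SelfieConfig } from '@incodetech/core/selfie';

// Illustrative values only; in orchestrated flows the dashboard supplies these.
const config: SelfieConfig = {
  showTutorial: true,
  showPreview: false,
  assistedOnboarding: false,
  enableFaceRecording: false,
  autoCaptureTimeout: 10,
  captureAttempts: 3,
  validateLenses: true,
  validateFaceMask: true,
  validateHeadCover: true,
  validateClosedEyes: true,
  validateBrightness: true,
  deepsightLiveness: 'SINGLE_FRAME',
  // Optional BaseFaceCaptureConfig overrides:
  cameraResolution: { width: 1280, height: 720 },
};
```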

Troubleshooting

Face Not Detected

  • Ensure good lighting (avoid backlighting)
  • Remove glasses or hats if possible
  • Position face within the outline
  • Check WASM is properly initialized

Camera Issues

  • Serve the page over HTTPS (camera access requires a secure context)
  • If permission was denied, re-enable camera access in the browser's site settings
  • Close other applications or browser tabs that may be holding the camera

Upload Errors

  • Check network connectivity
  • Verify session token is valid
  • Check WASM files are accessible

See Also