Tuteliq’s fraud prevention doesn’t rely on a single signal. Every verification runs 7 independent cross-referencing layers that compare data from different sources on the same document. A forger must defeat all layers simultaneously — editing one field on the front of a document will be caught by mismatches against the MRZ, barcode, back side, or declared metadata.

Cross-referencing layers

1. MRZ vs. OCR text

When a document contains a Machine Readable Zone (passports and many ID cards), Tuteliq compares the fields parsed from the MRZ against the corresponding fields OCR-extracted from the visible printed text.
| Compared field | How it catches forgery |
| --- | --- |
| Date of birth | MRZ says 1990-06-15 but printed DOB reads 1995-06-15 — front-side text was edited |
| Name | MRZ shows SMITH but printed name shows JONES — name was changed on the visible side |
| Document number | MRZ number doesn’t match the printed number — number was altered |
This is the highest-value fraud signal because the MRZ is protected by ICAO check digits that are difficult to recalculate correctly without specialized knowledge.
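
The check-digit scheme itself is public (ICAO Doc 9303): digits keep their value, letters A–Z map to 10–35, the filler character < counts as 0, and values are weighted 7, 3, 1 cyclically with the sum taken modulo 10. A minimal sketch in Python, as a generic reference implementation rather than Tuteliq's internal code:

def icao_check_digit(field: str) -> int:
    """Compute an ICAO 9303 check digit over an MRZ field."""
    weights = (7, 3, 1)  # applied cyclically, left to right
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif "A" <= ch <= "Z":
            value = ord(ch) - ord("A") + 10  # A=10 ... Z=35
        elif ch == "<":
            value = 0  # filler character
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10

# Worked example from ICAO Doc 9303: the field "AB2134<<<" yields 5.
assert icao_check_digit("AB2134<<<") == 5

A forger who edits the printed text but not the MRZ fails this cross-reference; one who edits the MRZ without recomputing the digits fails check-digit validation instead.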

2. Barcode vs. OCR text

For US and Canadian driver’s licenses, the PDF417 barcode on the back encodes all personal data independently of the printed text. Most forgers edit only the visual side and leave the barcode untouched.
| Compared field | How it catches forgery |
| --- | --- |
| Date of birth | Barcode says 1985-03-20 but front OCR reads 1990-06-15 — front was edited, back barcode wasn’t |
| Name | Barcode says JOHNSON but front says SMITH — front name was changed |
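
In the AAMVA encoding these barcodes use, personal data sits under three-letter element IDs: DCS (family name), DAC (first given name), DBB (date of birth), DAQ (customer/license number). A hedged sketch of the comparison; the OCR-side field names are hypothetical, decoding the barcode itself is out of scope, and real code must normalize dates first (US jurisdictions typically encode DBB as MMDDCCYY):

def barcode_vs_ocr(barcode: dict[str, str], ocr: dict[str, str]) -> list[str]:
    """Flag fields where the back-side barcode disagrees with front-side OCR."""
    checks = {
        "DCS": "last_name",        # AAMVA family name
        "DAC": "first_name",       # AAMVA first given name
        "DBB": "date_of_birth",    # AAMVA DOB; normalize the format before comparing
        "DAQ": "document_number",  # AAMVA customer/license number
    }
    mismatches = []
    for element_id, ocr_key in checks.items():
        b, o = barcode.get(element_id), ocr.get(ocr_key)
        if b and o and b.strip().upper() != o.strip().upper():
            mismatches.append(f"{ocr_key}: barcode says {b!r} but front OCR reads {o!r}")
    return mismatches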

3. Document front vs. back

When both sides of a document are provided, Tuteliq extracts name and DOB from each side independently and compares them.
| Signal | What it catches |
| --- | --- |
| Name mismatch | Front says JOHN SMITH, back says JANE DOE — mismatched front/back from different documents |
| DOB mismatch | Different dates on front and back — poorly assembled forgery |
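
Naive string equality would flag legitimate documents over diacritics, casing, or spacing, so the two sides have to be normalized before comparison. A minimal sketch of one common approach (Unicode decomposition plus case folding); the field names are illustrative assumptions:

import unicodedata

def normalize_name(name: str) -> str:
    """Uppercase, strip diacritics, and collapse whitespace for comparison."""
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return " ".join(stripped.upper().split())

def front_back_match(front: dict, back: dict) -> bool:
    """True when name and DOB agree across the two sides of the document."""
    return (
        normalize_name(front["name"]) == normalize_name(back["name"])
        and front["dob"] == back["dob"]
    )

# normalize_name("José  García") == normalize_name("JOSE GARCIA")  -> True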

4. Document type consistency

If the user declares a document type (e.g., “passport”) but the MRZ indicates a different type (e.g., ID card with I< prefix), the mismatch is flagged.
| Scenario | Signal |
| --- | --- |
| Declared passport, MRZ says I< | Document type mismatch |
| Declared national_id, MRZ says P< | Document type mismatch |
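
Per ICAO 9303, the first character of the MRZ encodes the document type (P for passports, I/A/C for ID cards, V for visas), so the check reduces to a lookup. A sketch; the declared-type strings mirror the table above, and the mapping granularity is an assumption:

MRZ_CODE_TO_DECLARED: dict[str, set[str]] = {
    "P": {"passport"},
    "I": {"national_id"},
    "A": {"national_id"},
    "C": {"national_id"},
    "V": {"visa"},
}

def document_type_consistent(declared: str, mrz: str) -> bool:
    """Compare the user-declared type against the MRZ's leading type code."""
    return declared in MRZ_CODE_TO_DECLARED.get(mrz[:1], set())

# document_type_consistent("passport", "I<UTO...")  -> False (declared passport, MRZ says ID card)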

5. IP vs. document country

Tuteliq compares the document’s country of origin against the geographic origin of the API request. A Brazilian CPF submitted from a Vietnamese IP address isn’t necessarily fraud, but it’s an anomaly worth flagging.
Geographic inconsistency is a soft signal — it generates a flag but doesn’t cause automatic failure. Diaspora populations, travelers, and VPN users can legitimately trigger this. It’s included in failure_reasons for your review logic to handle appropriately.
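
Because this is a soft signal, the natural implementation appends a reason rather than failing the verification. A sketch; the GeoIP lookup that produces request_country is out of scope here, and the reason string follows the failure_reasons format shown later on this page (ISO 3166-1 alpha-3 codes):

def geographic_flag(document_country: str, request_country: str) -> str | None:
    """Return a soft failure reason when countries disagree, else None.

    Soft signal only: diaspora users, travelers, and VPN users can
    legitimately trigger this, so it never hard-fails on its own.
    """
    if document_country != request_country:
        return (
            f"Geographic inconsistency: document from {document_country} "
            f"but request originated from {request_country}"
        )
    return None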

6. Age consistency (document vs. selfie)

When both a document and selfie are provided, Tuteliq compares the age calculated from the document’s DOB against the age estimated from the selfie by the vision AI. A discrepancy greater than 10 years is flagged.
| Scenario | Signal |
| --- | --- |
| Document says age 25, selfie suggests ~45 | The document may belong to someone else |
| Document says age 60, selfie suggests ~20 | Using a parent’s or older person’s document |
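
The arithmetic behind the flag is straightforward given the 10-year threshold stated above. A sketch, assuming the vision model returns a point estimate of age:

from datetime import date

def age_from_dob(dob: date, today: date | None = None) -> int:
    """Whole-year age as of `today`."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)

def age_inconsistent(dob: date, selfie_age_estimate: float, tolerance: float = 10.0) -> bool:
    """Flag when document age and selfie-estimated age differ by more than the threshold."""
    return abs(age_from_dob(dob) - selfie_age_estimate) > tolerance

# age_inconsistent(date(1999, 6, 15), selfie_age_estimate=45.0)  -> True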

7. OCR confidence gating

When OCR confidence falls below 60%, all extracted data is flagged as potentially unreliable. This prevents the system from making verification decisions based on garbage OCR output from blurry, damaged, or deliberately obscured documents.
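
Conceptually this gate is a single threshold applied before any cross-referencing runs. A sketch using the 60% floor from above; the return shape is an assumption:

OCR_CONFIDENCE_FLOOR = 0.60  # below this, extracted fields are treated as untrusted

def gate_ocr_fields(fields: dict[str, str], confidence: float) -> tuple[bool, str | None]:
    """Return (reliable, reason); unreliable OCR suppresses downstream decisions."""
    if confidence < OCR_CONFIDENCE_FLOOR:
        return False, (
            f"Low OCR confidence ({confidence:.0%}): extracted data may be unreliable "
            "due to blur, damage, or deliberate obscuring"
        )
    return True, None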

Document authenticity analysis

Beyond cross-referencing structured data, Tuteliq uses a vision AI model to analyze the document image itself for signs of forgery.

What the AI checks

| Analysis | What it detects |
| --- | --- |
| Document layout | Does the layout match known templates for the claimed document type and country? Photo position, text field placement, header/footer elements. |
| Security features | Visible holograms, microprint areas, guilloche patterns, watermarks, UV-reactive zones. |
| Font consistency | Are all text fields using the expected fonts? Any signs of text overlay or digital editing? |
| Color and print quality | Do colors match government-issued document standards? Is print quality consistent across the document? |
| Photo integration | Is the ID photo properly embedded, or does it look pasted/overlaid? |
| Language consistency | Does the text language match the claimed country of origin? |
| Edge and border quality | Are the document edges clean and consistent? Any signs of cropping or digital manipulation? |

Response

{
  "document_authenticity": {
    "is_authentic": true,
    "confidence": 0.94,
    "document_type_detected": "passport",
    "security_features_visible": ["hologram", "microprint", "guilloche_pattern"],
    "anomalies": [],
    "recapture_detected": false,
    "recapture_type": "none",
    "overall_assessment": "Document appears genuine with visible security features"
  }
}

Recapture detection

Recapture is one of the most common fraud vectors — photographing a document displayed on a screen or printed on paper. Tuteliq detects three types of recapture:
| Type | Detection method |
| --- | --- |
| Screen | Moiré patterns (interference between screen pixels and the camera sensor), screen bezels visible in frame, unnatural glow/reflections, pixel grid artifacts |
| Printout | Paper texture visible, printer dot patterns, flat lighting without natural document sheen, color inconsistency |
| Photo-of-photo | Visible photo edges within the frame, double reflection layers, perspective distortion |
Recapture detection triggers a hard failure — the verification status is set to failed, not needs_review.
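
To make the screen signal concrete: pixel-grid interference concentrates spectral energy in narrow mid-frequency peaks, which a frequency-domain heuristic can measure. The NumPy sketch below illustrates the idea only; it is not Tuteliq's production detector, and the band limits and any decision threshold are arbitrary assumptions:

import numpy as np

def moire_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy in a mid-frequency annulus.

    Screen recaptures tend to show periodic mid-frequency peaks (moiré);
    clean captures are dominated by low frequencies.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    band = (radius > 0.15 * min(h, w)) & (radius < 0.45 * min(h, w))
    return float(spectrum[band].sum() / spectrum.sum())

# A high ratio on a document crop is one vote toward "screen recapture";
# production systems combine several such cues before hard-failing.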

Failure reasons

Every fraud signal generates a specific, human-readable failure reason:
{
  "failure_reasons": [
    "DOB mismatch: MRZ shows 1990-06-15 but document text shows 1995-06-15 — possible tampering",
    "Document recapture detected (screen): image appears to be a photo of a screen",
    "Geographic inconsistency: document from BRA but request originated from VNM"
  ]
}
These reasons are designed to be:
  • Actionable — your moderation team can understand exactly what went wrong
  • Specific — each reason identifies the exact data points that disagree
  • Auditable — included in the API response for compliance logging

Hard vs. soft failures

| Signal | Severity | Result |
| --- | --- | --- |
| Liveness failed | Hard | failed |
| Face mismatch | Hard | failed |
| Recapture detected | Hard | failed |
| Document authenticity failed (high confidence) | Hard | failed |
| MRZ check digit invalid | Soft | needs_review |
| Name/DOB cross-reference mismatch | Soft | needs_review |
| Document expired | Soft | needs_review |
| Low OCR confidence | Soft | needs_review |
| Geographic inconsistency | Soft | needs_review |
| Age inconsistency (document vs. selfie) | Soft | needs_review |
Use the failure_reasons array to build custom review workflows. For example, you might auto-reject failed results but route needs_review results to a human moderator queue with the specific reasons displayed.
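
A minimal sketch of that routing, assuming a result object shaped like the responses on this page (a status field plus failure_reasons); the queue is a stand-in for your moderation tooling:

def route_verification(result: dict, moderation_queue: list) -> str:
    """Auto-reject hard failures; queue soft failures for human review."""
    if result["status"] == "failed":
        return "auto_reject"  # hard signals: liveness, face mismatch, recapture
    if result["status"] == "needs_review":
        # Surface the specific reasons so moderators see exactly
        # which data points disagreed.
        moderation_queue.append(result["failure_reasons"])
        return "manual_review"
    return "approve"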

Next steps

  • Document Checks: 45-country document validation and MRZ parsing.
  • Liveness Detection: how visual liveness prevents spoofing attacks.