Tuteliq’s fraud prevention doesn’t rely on a single signal. Every verification runs 7 independent cross-referencing layers that compare data from different sources on the same document. A forger must defeat all layers simultaneously — editing one field on the front of a document will be caught by mismatches against the MRZ, barcode, back side, or declared metadata.
When a document contains a Machine Readable Zone (passports, many ID cards), Tuteliq compares the fields parsed from the MRZ against the same fields extracted by OCR from the visible printed text.
| Compared field | How it catches forgery |
| --- | --- |
| Date of birth | MRZ says 1990-06-15 but printed DOB reads 1995-06-15 — front-side text was edited |
| Name | MRZ shows SMITH but printed name shows JONES — name was changed on the visible side |
| Document number | MRZ number doesn't match the printed number — number was altered |
This is the highest-value fraud signal: the MRZ is protected by ICAO 9303 check digits, so a forger who edits a field without correctly recomputing them leaves the zone internally inconsistent.
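The check-digit scheme itself is public (ICAO Doc 9303): each character maps to a numeric value (digits as themselves, A–Z as 10–35, the filler `<` as 0), values are weighted by the repeating sequence 7, 3, 1, and the sum is taken modulo 10. Here is a minimal sketch; the TD3 (passport) field positions come from the spec, while the function names are illustrative, not part of the Tuteliq API:

```python
def icao_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 repeating, sum mod 10."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":                     # filler character counts as zero
            return 0
        return ord(ch) - ord("A") + 10    # A=10 ... Z=35

    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# TD3 passport, MRZ line 2: the DOB occupies characters 14-19 (YYMMDD)
# with its check digit at character 20 (0-indexed slice 13:19 and index 19).
def dob_is_consistent(mrz_line2: str) -> bool:
    return icao_check_digit(mrz_line2[13:19]) == int(mrz_line2[19])

assert icao_check_digit("900615") == 3    # 9*7 + 0*3 + 0*1 + 6*7 + 1*3 + 5*1 = 113 -> 3
```

An edited DOB whose check digit no longer validates is itself a tamper signal, independent of the OCR comparison.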
For US and Canadian driver’s licenses, the PDF417 barcode on the back encodes all personal data independently of the printed text. Most forgers only edit the visual side.
| Compared field | How it catches forgery |
| --- | --- |
| Date of birth | Barcode says 1985-03-20 but front OCR reads 1990-06-15 — front was edited, back barcode wasn't |
| Name | Barcode says JOHNSON but front says SMITH — front name was changed |
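A sketch of the comparison step, assuming the PDF417 payload has already been decoded into AAMVA data elements (DCS is the family name and DBB the date of birth in the AAMVA DL/ID standard; the helper and field names here are illustrative, and both sources are assumed to be normalized to a common format upstream):

```python
# Decoded AAMVA elements, e.g. {"DCS": "JOHNSON", "DBB": "1985-03-20"}.
AAMVA_TO_FIELD = {"DCS": "last_name", "DBB": "date_of_birth"}

def cross_check_barcode(front_ocr: dict[str, str],
                        barcode: dict[str, str]) -> list[str]:
    """Compare back-barcode values against front-side OCR values."""
    reasons = []
    for element_id, field in AAMVA_TO_FIELD.items():
        back = barcode.get(element_id)
        front = front_ocr.get(field)
        if back and front and back != front:
            reasons.append(
                f"{field} mismatch: barcode says {back} "
                f"but front OCR reads {front} (possible front-side edit)"
            )
    return reasons
```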
If the user declares a document type (e.g., “passport”) but the MRZ indicates a different type (e.g., ID card with I< prefix), the mismatch is flagged.
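In ICAO 9303 the first character of the MRZ encodes the document type (P for passports; A, C, or I for ID cards; V for visas). A minimal version of this check might look like the following; the mapping of type codes onto declared-type strings like "passport" is an assumption about Tuteliq's vocabulary:

```python
# ICAO 9303 document codes mapped onto assumed declared-type strings.
MRZ_DOC_TYPE = {"P": "passport", "A": "id_card", "C": "id_card",
                "I": "id_card", "V": "visa"}

def declared_type_flag(declared: str, mrz_line1: str) -> str | None:
    mrz_type = MRZ_DOC_TYPE.get(mrz_line1[0])
    if mrz_type and mrz_type != declared:
        return (f"Document type mismatch: declared {declared} "
                f"but MRZ indicates {mrz_type}")
    return None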
Tuteliq compares the document’s country of origin against the geographic origin of the API request. A Brazilian CPF submitted from a Vietnamese IP address isn’t necessarily fraud, but it’s an anomaly worth flagging.
Geographic inconsistency is a soft signal — it generates a flag but doesn’t cause automatic failure. Diaspora populations, travelers, and VPN users can legitimately trigger this. It’s included in failure_reasons for your review logic to handle appropriately.
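A sketch of how this soft signal behaves, assuming ISO 3166-1 alpha-3 codes on both sides (the function name is illustrative):

```python
def geographic_flag(doc_country: str, request_country: str) -> str | None:
    """Soft signal: returns a failure_reason string, never a hard failure."""
    if doc_country != request_country:
        return (f"Geographic inconsistency: document from {doc_country} "
                f"but request originated from {request_country}")
    return None

# geographic_flag("BRA", "VNM") adds a reason for your review logic;
# it does not change the verification status by itself.
```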
When both a document and selfie are provided, Tuteliq compares the age calculated from the document’s DOB against the age estimated from the selfie by the vision AI. A discrepancy greater than 10 years is flagged.
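A sketch of the rule, using the 10-year threshold stated above (the reason wording and function names are illustrative):

```python
from datetime import date

def age_from_dob(dob: date, today: date) -> int:
    # Subtract one year if the birthday hasn't occurred yet this year.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def age_mismatch_flag(dob: date, selfie_age: float,
                      today: date | None = None) -> str | None:
    today = today or date.today()
    doc_age = age_from_dob(dob, today)
    if abs(doc_age - selfie_age) > 10:
        return (f"Age mismatch: document DOB implies {doc_age} "
                f"but selfie appears ~{selfie_age:.0f}")
    return None
```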
When OCR confidence falls below 60%, all extracted data is flagged as potentially unreliable. This prevents the system from making verification decisions based on garbage OCR output from blurry, damaged, or deliberately obscured documents.
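The gating rule is simple enough to sketch directly; the 60% threshold comes from the text, while the shape of the output is an assumption:

```python
LOW_CONFIDENCE_THRESHOLD = 0.60  # the documented 60% cutoff

def gate_extraction(fields: dict[str, str], ocr_confidence: float) -> dict:
    # Below the threshold, every field is carried through but marked
    # unreliable, so no verification decision rests on garbage OCR.
    reliable = ocr_confidence >= LOW_CONFIDENCE_THRESHOLD
    return {name: {"value": value, "reliable": reliable}
            for name, value in fields.items()}
```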
Recapture is one of the most common fraud vectors — photographing a document displayed on a screen or printed on paper. Tuteliq detects three types of recapture:
| Type | Detection method |
| --- | --- |
| Screen | Moiré patterns (interference between screen pixels and camera sensor), screen bezels visible in frame, unnatural glow/reflections, pixel-grid artifacts |
| Printout | Paper texture visible, printer dot patterns, flat lighting without natural document sheen, color inconsistency |
| Photo-of-photo | Visible photo edges within the frame, double reflection layers, perspective distortion |
Recapture detection triggers a hard failure — the verification status is set to failed, not needs_review.
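For intuition, one classical screen-recapture heuristic looks for the moiré signature in the frequency domain: interference between the screen's pixel grid and the camera sensor shows up as strong isolated peaks in the FFT spectrum. This sketch is illustrative only and says nothing about Tuteliq's internal models:

```python
import numpy as np

def moire_peak_ratio(gray: np.ndarray) -> float:
    """Crude moiré indicator: strongest off-center FFT peak vs. typical energy."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Ignore the low-frequency core, which dominates any natural image.
    outside_core = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > (min(h, w) // 8) ** 2
    rest = spectrum[outside_core]
    # Periodic pixel-grid interference produces a peak far above the median.
    return float(rest.max() / (np.median(rest) + 1e-9))

# A production system combines many cues (bezels, glow, dot patterns);
# a single ratio with a hand-tuned threshold is only a toy.
```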
Every fraud signal generates a specific, human-readable failure reason:
{ "failure_reasons": [ "DOB mismatch: MRZ shows 1990-06-15 but document text shows 1995-06-15 — possible tampering", "Document recapture detected (screen): image appears to be a photo of a screen", "Geographic inconsistency: document from BRA but request originated from VNM" ]}
These reasons are designed to be:

- Actionable — your moderation team can understand exactly what went wrong
- Specific — each reason identifies the exact data points that disagree
- Auditable — included in the API response for compliance logging
Use the failure_reasons array to build custom review workflows. For example, you might auto-reject failed results but route needs_review results to a human moderator queue with the specific reasons displayed.
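A minimal sketch of such a workflow; the status values `failed` and `needs_review` and the `failure_reasons` field come from the API response above, while the handler functions are placeholders for your own logic:

```python
def reject(reasons: list[str]) -> None:              # placeholder handler
    print("auto-rejected:", reasons)

def enqueue_for_review(reasons: list[str]) -> None:   # placeholder handler
    print("queued for moderator:", reasons)

def approve() -> None:                                # placeholder handler
    print("approved")

def handle_verification(result: dict) -> None:
    reasons = result.get("failure_reasons", [])
    if result["status"] == "failed":
        reject(reasons)               # hard failures (e.g. recapture) auto-reject
    elif result["status"] == "needs_review":
        enqueue_for_review(reasons)   # soft signals go to a human queue
    else:
        approve()
```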