Consent, Retention, Access Control, Logs, and the Vendor Checklist That Actually Clears Procurement
Why this topic has become a deal-maker, not a legal footnote
India’s CCTV market is scaling fast, and “AI analytics” is moving from experiments to procurement. Independent market estimates put India’s video surveillance market at about USD 2.0B in 2024, with projections of roughly USD 5.26B by 2030, implying a CAGR in the ~17% range depending on methodology. But the bigger change is not market size. It is buyer psychology.
After the Digital Personal Data Protection Act, 2023 (DPDP) and the notified DPDP Rules, 2025, face recognition is now treated by enterprises and government buyers as a high-impact risk system. The logic is straightforward:
- A face template is not like a badge. If it leaks, you cannot “rotate” your face like a password.
- At scale, misidentification becomes an HR incident, a customer grievance, or a reputational event.
- Under DPDP, security and breach-handling have direct penalty exposure, and boards don’t like “we’ll handle it later.”
This is why many face recognition pilots in India do not fail in a demo. They fail in legal review, CISO review, or procurement diligence.
A useful constitutional anchor is the Supreme Court’s privacy verdict in Puttaswamy (2017), which held that privacy is a fundamental right. In practical terms, any identity-at-scale system is expected to be necessary, proportionate, and accountable, even if those exact words are not used in your RFP.
The DPDP “numbers” that change decisions
Most compliance conversations stay abstract. Procurement does not.
- Penalties are material
Public summaries of the DPDP penalty schedule highlight penalties of up to ₹250 crore for failure to take reasonable security safeguards to prevent personal data breaches, and up to ₹200 crore for certain categories of non-compliance (including children-related obligations).
- Breach response expectations are time-bound
The DPDP Rules, 2025 create an operational expectation: inform affected individuals and the Board “without delay,” and provide a detailed report within 72 hours (or longer if the Board allows).
- Logs are not optional in practice
Rule interpretations and guidance consistently call out retention of logs for at least one year in the context of accountability and breach detection and investigation.
Once you internalize these three points, you can see why face recognition projects increasingly move toward “governable architectures,” especially on-prem or edge-controlled deployments.
The real DPDP test for face recognition CCTV deployments
Buyers don’t ask “Are you DPDP compliant?” They ask questions that imply it:
- Can you prove consent and purpose limitation?
- Can you show retention and deletion evidence?
- Can you show access control with separation of duties?
- Can you produce audit logs of every search and export?
- If a breach happens on a Friday night, can you assemble the report by Monday?
If your platform cannot demonstrate these in a live walkthrough, it is usually blocked.
This is where edge-first designs are winning trust. Not because cloud is “bad,” but because the control boundary is clearer: data stays local, identity processing is constrained, and the customer can own the governance. This is also why platforms like IndoAI tend to position face recognition as a controlled module inside an on-prem video intelligence stack, rather than a black-box service.
Architecture that survives legal and procurement scrutiny
A senior buyer expects you to separate four data classes and govern them differently:
1) Video footage (raw stream and recordings)
- Purpose: security monitoring, incident evidence
- Typical retention in Indian enterprise deployments: often 7–30 days, extended only when an incident is flagged or another law requires a longer period.
- Key control: export controls and auditability.
2) Face templates (embeddings) and identity records
This is the high-impact dataset.
- Purpose: identification, access control, watchlist matching
- Key controls: encryption, strict RBAC, minimal retention, deletion proofs, and human oversight.
3) Alerts and operational metadata
- Purpose: workflows such as entry event logs, escalations, exception reporting
- Key control: prevent “purpose creep” into profiling.
4) Audit logs and system logs
- Purpose: accountability and incident response
- Expectation: retained long enough to support investigations and DPDP reporting timelines, commonly referenced as a one-year baseline.
If you want a practical benchmark of how the Indian state describes biometric handling, look at DigiYatra. The DigiYatra policy states that facial data should not be stored longer than the passenger’s transit duration and should be purged within 24 hours of departure, with consent explicitly collected.
Even though DigiYatra is not your deployment, the buyer takeaway is powerful: biometric retention must be aggressively bounded.
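To make “aggressively bounded” concrete across the four data classes above, here is a minimal sketch of a per-class retention schedule expressed as configuration. The class names, windows, and trigger names are illustrative assumptions, not prescribed values; only the one-year log baseline echoes the DPDP guidance cited earlier.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class RetentionPolicy:
    data_class: str
    default_window: timedelta        # normal deletion horizon
    hold_triggers: tuple[str, ...]   # events that pause deletion
    deletion_evidence: bool          # must produce a deletion report

# Illustrative values only; actual windows are a business/legal decision.
RETENTION_SCHEDULE = {
    "video_footage": RetentionPolicy(
        "video_footage", timedelta(days=30),
        hold_triggers=("incident_flagged", "legal_hold"),
        deletion_evidence=True,
    ),
    "face_template": RetentionPolicy(
        "face_template", timedelta(days=0),   # relationship-bound: delete on exit
        hold_triggers=("active_relationship",),
        deletion_evidence=True,
    ),
    "alert_metadata": RetentionPolicy(
        "alert_metadata", timedelta(days=90),
        hold_triggers=("open_ticket",),
        deletion_evidence=False,
    ),
    "audit_log": RetentionPolicy(
        "audit_log", timedelta(days=365),     # one-year baseline per DPDP guidance
        hold_triggers=("open_investigation",),
        deletion_evidence=True,
    ),
}
```

Writing the schedule down as configuration, rather than policy prose, is what lets a buyer walk through it in a live review.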
Consent that holds up in Indian environments
In Indian deployments, the consent issue is rarely philosophical. It is operational:
- In a housing society, residents will push back if visitors are forced into biometric collection.
- In an enterprise, unions or employee committees may object if attendance becomes covert monitoring.
- In schools, parental consent expectations become strict, and scrutiny increases.
The defensible pattern that clears reviews is a dual-path entry design:
- Face recognition is opt-in where consent is required.
- A non-biometric alternative exists (card, QR, OTP, manual verification), so consent is not coerced.
Real-world example: large corporate campuses often permit employees to opt into facial entry for speed, while contractors use QR-based passes. It reduces friction and preserves consent validity.
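As a sketch of the dual-path pattern, with a stubbed consent store and matcher (has_valid_consent, face_match, CONSENT_LEDGER, and the threshold value are all placeholder assumptions, not a real API):

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    matched: bool
    score: float

# Stand-in consent store; real deployments read a consent ledger.
CONSENT_LEDGER = {"facial_entry": {"emp-1001"}}

def has_valid_consent(person_id: str, purpose: str) -> bool:
    return person_id in CONSENT_LEDGER.get(purpose, set())

def face_match(person_id: str, threshold: float) -> MatchResult:
    # Stub: real systems call the on-prem matcher here.
    return MatchResult(matched=True, score=0.92)

def authorize_entry(person_id: str) -> str:
    """Dual-path entry: biometric only with recorded consent,
    a non-biometric path otherwise, so consent is never coerced."""
    if has_valid_consent(person_id, purpose="facial_entry"):
        result = face_match(person_id, threshold=0.72)  # illustrative threshold
        if result.matched:
            return "granted:face"
        return "fallback:manual_verification"  # graceful degrade on false non-match
    return "fallback:qr_or_card"  # equal access without biometrics

print(authorize_entry("emp-1001"))   # granted:face
print(authorize_entry("visitor-7"))  # fallback:qr_or_card
```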
IndoAI-style positioning that works subtly here: “We support both biometric and non-biometric access paths by design, so sites can meet local policy and consent realities without breaking operations.”
Retention and deletion: where most deployments fail quietly
Retention is where “AI demos” die.
A DPDP-defensible approach is not “retain less.” It is “retain with proof.”
Practical examples that buyers recognize:
- Airport-like retention discipline (DigiYatra reference)
Biometric data purged on a short timeline and consent explicit. - Resident lifecycle deletion
A housing society enrolls residents for access control, but the real audit question is: when a tenant moves out, can the committee see deletion evidence? - Employee exit deletion
In factories and enterprises, HR offboarding should trigger deletion of face templates within a defined SLA, while keeping event logs for audit. - Visitor templates avoided
In hospitals, visitor traffic is high. Mature deployments avoid persisting visitor face templates, using short-lived tokens or manual verification to reduce biometric retention risk. - Watchlists with expiry
Retail and mall security sometimes maintain a “known offender” watchlist. A DPDP-aligned governance model demands inclusion criteria, review cadence, and expiry, rather than permanent lists.
Rule-of-thumb governance that gets approved: video can have a business retention window; face templates must be relationship-bound, and deletion must be demonstrable.
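A minimal sketch of “retain with proof”: a deletion routine that emits a tamper-evident evidence record alongside the deletion itself. The field names and SHA-256 digest are assumptions about one reasonable implementation, not a mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

def delete_face_template(store: dict, subject_id: str, reason: str) -> dict:
    """Delete a template and return a deletion-evidence record.
    The record, not the deletion alone, is what clears an audit."""
    template = store.pop(subject_id, None)
    evidence = {
        "subject_id": subject_id,
        "data_class": "face_template",
        "reason": reason,  # e.g. "tenant_moved_out", "hr_offboarding"
        "deleted_at": datetime.now(timezone.utc).isoformat(),
        "existed": template is not None,
    }
    # Digest makes individual records hard to edit silently.
    evidence["digest"] = hashlib.sha256(
        json.dumps(evidence, sort_keys=True).encode()
    ).hexdigest()
    return evidence

# Usage: HR offboarding triggers deletion within the agreed SLA window.
store = {"emp-1001": b"...embedding bytes..."}
print(delete_face_template(store, "emp-1001", reason="hr_offboarding"))
```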
Access control and auditability: the trust anchor
This is where senior procurement teams get concrete.
They typically want to see:
- Role-based permissions where operators cannot run face searches casually
- Separation of duties so the same person cannot both change retention settings and export data
- Mandatory reasons or ticket IDs for exports
- Immutable logs that show who searched, when, why, and what was returned
This is not just “good practice.” It’s what enables the 72-hour breach reporting expectation.
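One way this can look in code, as a sketch: role permissions that enforce separation of duties, a mandatory ticket ID on every search or export, and a hash-chained append-only log. The role names, fields, and chaining scheme are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "operator": {"view_live"},
    "investigator": {"view_live", "face_search"},
    "admin": {"change_retention"},         # deliberately cannot export
    "export_officer": {"export_footage"},  # deliberately cannot change retention
}

AUDIT_LOG: list[dict] = []  # append-only; real systems use WORM storage

def audited_action(actor: str, role: str, action: str,
                   ticket_id: str, detail: dict) -> None:
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not perform {action}")
    if not ticket_id:
        raise ValueError("every search/export needs a reason or ticket ID")
    prev = AUDIT_LOG[-1]["digest"] if AUDIT_LOG else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor, "role": role, "action": action,
        "ticket_id": ticket_id, "detail": detail, "prev": prev,
    }
    # Chaining each entry to the previous digest makes edits detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)

# Usage: a face search must carry who, when, why, and what was returned.
audited_action("a.kumar", "investigator", "face_search",
               ticket_id="SEC-2291", detail={"matches_returned": 2})
```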
A relevant public signal in India is the Supreme Court’s emphasis on oversight for CCTV footage in sensitive contexts (for example, police stations), including independent oversight mechanisms. The direction is clear: surveillance without accountability is increasingly unacceptable.
Accuracy, false matches, and why governance must include “human-in-loop”
A serious buyer will ask: “What happens when it’s wrong?”
NIST’s FRVT program formalizes performance in terms of false match and false non-match rates at given thresholds, and its reports also evaluate demographic differentials. The procurement implication is not “which model is best.” It is:
- You must be able to tune thresholds by risk level
- You must treat face recognition as decision support for adverse actions
- You must define operational handling of false positives and false negatives
Real-world example: in hospitals and schools, a false match causing denial of entry is not acceptable. The system must fall back to manual verification gracefully. In contrast, for a controlled factory zone, you may accept tighter thresholds but still require confirmation before escalation.
IndoAI positioning that lands well: “We design operational workflows where the AI triggers an alert and the human confirms before action, and we log the full chain.”
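A sketch of risk-tiered thresholds with human confirmation before any adverse action, under the assumption of a normalized similarity score. The tier names and threshold values are illustrative; real values should come from your matcher’s measured false match rates.

```python
from dataclasses import dataclass

# Illustrative thresholds: higher = stricter matching, fewer false matches,
# more false non-matches. Real values come from your matcher's FMR curves.
RISK_TIERS = {
    "hospital_entry": {"threshold": 0.60, "on_no_match": "manual_verify"},
    "school_entry": {"threshold": 0.60, "on_no_match": "manual_verify"},
    "factory_restricted_zone": {"threshold": 0.85,
                                "on_no_match": "deny_pending_review"},
}

@dataclass
class Alert:
    subject_id: str
    score: float
    zone: str

def handle_match(alert: Alert) -> str:
    tier = RISK_TIERS[alert.zone]
    if alert.score < tier["threshold"]:
        return tier["on_no_match"]  # graceful fallback, never silent denial
    # Above threshold: the AI raises an alert; a human confirms before action.
    return "alert_raised:awaiting_human_confirmation"

print(handle_match(Alert("emp-1001", 0.91, "factory_restricted_zone")))
```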
Vendor due diligence buyers now run (and how to pass it)
This is where IndoAI-style positioning can be visible without being salesy. The language buyers like is: “Here’s what we can prove.”
A DPDP-ready vendor pack typically includes:
- Data flow map showing where video, face templates, alerts, and logs live
- Consent workflow, notice templates, opt-out path, and alternative access path
- Retention schedule per data class, deletion SLAs, and deletion evidence reports
- RBAC matrix and separation of duties model
- Audit log examples for search, export, admin changes
- Incident response playbook aligned to “without delay” notification and 72-hour detail expectation
- Contract commitments: no secondary use, no training on customer data by default, sub-processor transparency
Real-world procurement reality: with India tightening cyber and security scrutiny around connected CCTV equipment and testing regimes, buyers have become more sensitive to where data flows and who can access it. Strong on-prem governance and minimised data movement become a commercial advantage, not a constraint.
10 real-life examples blended into buyer logic
These are not “case studies.” They are the kinds of scenarios that appear in Indian projects:
- Airports: DigiYatra sets an expectation of explicit consent and rapid purge, which becomes a mental model for biometric restraint.
- Policing: Delhi Police has publicly referenced facial recognition integration for missing children/persons, which increases public sensitivity and demand for controls.
- Police station CCTV oversight: Supreme Court directions emphasize oversight and accountability for CCTV, reinforcing “logs and governance” expectations.
- IT parks: employee opt-in face entry plus contractor QR entry to preserve valid consent.
- Housing societies: resident enrollment with strict visitor non-biometric flow to avoid coercion claims.
- Hospitals: restricted-zone staff FR with manual fallback, visitor FR avoided to minimize biometric storage.
- Factories: access control for high-risk zones where audit logs matter for safety incident investigations.
- Retail: exception watchlists used sparingly with expiry and review cadence to avoid profiling allegations.
- Schools: strict parental consent expectations and strong pressure for opt-out alternatives; most successful deployments use FR only for staff, not children, unless governance is extremely strong.
- Multi-site enterprises: centralized policy but local retention and access controls, with consistent logs across sites so incident response is feasible within 72 hours.
FAQs
1) Is face recognition data considered “personal data” under DPDP?
Yes. A face template identifies an individual, and unlike a badge or password it cannot be rotated if it leaks, which is why buyers treat it as a high-impact data class.
2) Does “on-prem” automatically make us DPDP compliant?
No. On-prem makes the control boundary clearer, but you still need consent workflows, retention and deletion evidence, RBAC, and audit logs to pass a review.
3) Do we always need consent for face recognition?
Not in every scenario; DPDP recognizes certain legitimate uses, including employment contexts. In mixed environments (visitors, residents, students), opt-in consent with a non-biometric alternative is the defensible pattern.
4) What does “valid consent” look like in a CCTV environment?
– clear notice at entry points
– a digital notice that explains purpose, data types, retention, and withdrawal
– a recorded consent artefact linked to notice version
– an alternative access method for opt-out users
This prevents consent from being challenged as coerced.
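A minimal sketch of what such a consent artefact might look like as a record; the fields mirror the checklist above, and all names and values are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str           # e.g. "facial_entry"
    notice_version: str    # ties the consent to the exact notice shown
    granted_at: str
    channel: str           # e.g. "entry_kiosk", "resident_app"
    withdrawn_at: Optional[str] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

# Usage: enrollment produces an artefact you can later show an auditor,
# and withdrawal is recorded rather than silently deleted.
record = ConsentRecord(
    subject_id="resident-204",
    purpose="facial_entry",
    notice_version="entry-notice-v3.2",
    granted_at=datetime.now(timezone.utc).isoformat(),
    channel="entry_kiosk",
)
```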
5) What retention periods are defensible for video vs face templates?
– video: business retention window, commonly 7–30 days unless incident flagged
– face templates: relationship-bound retention, deletion on exit
– visitors: avoid storing templates unless there is a compelling reason
DigiYatra is often cited internally as an example of tight biometric retention and purge timelines.
6) What is the minimum logging standard buyers expect?
– every face search, match result, and operator identity
– every export of video or images
– every admin change to retention, watchlists, and access control
This is what enables breach reconstruction and reporting within DPDP Rules timelines.
7) What does DPDP breach handling force us to build?
– detect unauthorized access
– isolate systems
– identify scope and affected individuals
– inform affected individuals and the Board without delay
– compile the detailed report within the 72-hour expectation (unless extended)
This is why audit logs and access histories become central.
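As a sketch of why those logs matter: given an append-only audit trail like the one outlined earlier, scoping a suspected incident becomes a query rather than forensics. The 72-hour window and field names are assumptions.

```python
from datetime import datetime, timedelta, timezone

def assemble_breach_report(audit_log: list[dict], suspect_actor: str,
                           window_hours: int = 72) -> dict:
    """Scope a suspected unauthorized-access incident from the audit trail."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    relevant = [
        entry for entry in audit_log
        if entry["actor"] == suspect_actor
        and datetime.fromisoformat(entry["ts"]) >= cutoff
    ]
    return {
        "suspect_actor": suspect_actor,
        "actions_in_window": [entry["action"] for entry in relevant],
        "tickets_cited": sorted({entry["ticket_id"] for entry in relevant}),
        "affected_detail": [entry["detail"] for entry in relevant],
        "log_entries_reviewed": len(relevant),
    }
```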
8) How do we handle false matches responsibly?
– conservative thresholds
– step-up verification for sensitive contexts
– human confirmation
NIST FRVT materials formalize false match concepts and show why thresholds matter.
9) What should we insist on contractually from the vendor?
– no secondary use of data
– no training on customer data by default
– deletion SLAs and deletion certificates on termination
– breach support SLAs
– sub-processor transparency
This reduces fiduciary risk.
10) What is the simplest way to build buyer trust fast?
– show RBAC
– show search logs
– show export logs
– show deletion job reports
If a vendor cannot demonstrate these, DPDP will eventually become a blocker.
