API proposal: Network-Authenticated Assembly #305

Open
abd-abhisek wants to merge 6 commits into camaraproject:main from abd-abhisek:patch-1

Conversation

@abd-abhisek
Contributor

#300

What type of PR is this?

Add one of the following kinds:

  • enhancement/feature
  • documentation

What this PR does / why we need it:

This PR introduces a new API proposal for Network-Authenticated Assembly: https://github.com/camaraproject/APIBacklog/issues/300

Which issue(s) this PR fixes:

https://github.com/camaraproject/APIBacklog/issues/300

Fixes #

Special notes for reviewers:

This PR provides the initial proposal for discussion in the API Backlog Working Group.
Feedback from the community is welcome regarding scope, alignment with existing CAMARA APIs, and potential collaboration.

Changelog input

 release-note

Additional documentation

This section can be blank.

docs


### YAML code available?
No. A sequence diagram is available in the deck.
Contributor

@albertoramosmonagas Mar 11, 2026


Can you add this diagram to the actual PR?

@abd-abhisek
Contributor Author

abd-abhisek commented Mar 11, 2026 via email

@albertoramosmonagas
Contributor

Hi @abd-abhisek, thanks for the PR. When you have the diagram and the PowerPoint presentation, can you attach them to this PR? Here are some points we can discuss in the backlog session:

  1. Overlap: batch + k-of-n is not a new capability
    The underlying signal is still “presence in an area within a recent time window”. The proposed delta is mainly batching plus server-side aggregation into a group verdict. Please justify why this must be a new API family instead of a DeviceLocation scope enhancement (group co-presence) reusing existing area/time/error semantics.

  2. Weak differentiation vs Location Verification
    “Location Verification requires exact lat/long” is not a strong differentiator: both approaches rely on a zone definition and return a verification outcome. Please restate the differentiation in normative terms: what cannot be achieved by Location Verification per device plus client-side aggregation, and why this must be standardized server-side for interoperability.

  3. Consent/privacy is underspecified (multi-device inference)
    Group co-presence enables relationship inference (“who was with whom, where, when”). Please specify the authorization/consent model: enterprise-controlled fleet only (single tenant) vs multi-user clusters (user consent / 3-legged), and the data-minimization safeguards.

  4. Subscriptions claim is too vague
    If “continuous checks” remain in scope, “aligns with CloudEvents” is not enough. Please specify the subscription resource model (endpoints, event types/payload, callback behavior, expiry/cancellation, quotas and lifecycle states).
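To make point 1 concrete for the backlog discussion, here is a rough sketch of what the proposed server-side aggregation reduces to when expressed over per-device "presence in area" results. Every name here (`k_of_n_verdict`, the verdict strings) is illustrative only, not a proposed normative field:

```python
# Illustrative sketch: a group co-presence verdict as a simple fold
# over per-device presence booleans. Names and verdict semantics are
# assumptions for discussion, not the proposal's definition.

def k_of_n_verdict(presence: list[bool], k: int) -> str:
    """Aggregate per-device presence results into a group verdict."""
    m = sum(presence)   # devices verified present
    n = len(presence)   # total devices in the cluster
    if m == n:
        return "co_present"
    if m >= k:
        return "partial"        # threshold met, but not all present
    return "not_co_present"

# Example: N=4 devices, threshold k=3, 3 present
print(k_of_n_verdict([True, True, True, False], 3))  # partial
```

If the delta over DeviceLocation is essentially this fold, the question is why it needs a new API family rather than a scope enhancement.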

@abd-abhisek
Contributor Author


My challenge is that I am unable to attach the deck, possibly due to a security-policy change at my end. I really appreciate your review comments.
Let me try to add slides for each of the questions. I believe we have taken these into account, but I am keen to understand your and the community's perspective and refine as needed.
Thanks again

@abd-abhisek
Contributor Author


@albertoramosmonagas I was able to upload the presentation; it would be great if you and other colleagues could take a look before our next discussion. @bishnu-infy @hemantagogoi-infy @vijaymurthyn FYI.

@albertoramosmonagas
Contributor

Hi @abd-abhisek,

Thanks for uploading the updated deck — good progress, especially on interoperability (SIM/eSIM-only clusters, tags/beacons removed). Key backlog points still need to be closed with concrete, testable semantics:

  1. Scope/overlap: the core still looks like “presence in an area within a recent time window”; the delta reads mainly batch + k-of-n aggregation. Please justify why this must be a new API family vs a DeviceLocation scope enhancement.

  2. Differentiation vs Location Verification: “LV requires exact lat/long” is not a strong differentiator. Please state the delta in normative terms: what cannot be achieved via LV per device + client-side aggregation, and why server-side standardization is required for interoperability.

  3. Consent/privacy: the two-mode framing (enterprise fleet vs multi-user) is a good step, but please define the portable auth/consent rules (especially for multi-user clusters) and the privacy/data-minimization safeguards.

  4. Subscriptions: if “continuous checks” remain in scope, “aligned with CloudEvents” is not enough. Please specify the subscription model concretely (endpoints, event types/payload, callback/retries, expiry/cancellation, quotas/lifecycle, errors). Also, consider whether subscriptions can be deferred until on-demand semantics are validated.

Additionally, please clarify:

  • Zone input: Area vs operator zoneId/discovery.
  • Group verdict semantics: definition of “atomic” and rules for co_present/partial/not_co_present (time tolerance/maxAge, partial failures, device status handling).
  • Attestation token: minimal format/claims, signature verification/key discovery, anti-replay.

@abd-abhisek
Contributor Author

abd-abhisek commented Apr 7, 2026 via email

@albertoramosmonagas
Contributor

Hi @abd-abhisek,

To make sure we’re all aligned before the next discussion, I have a few clarifying questions (mainly to turn the current statements into testable, portable semantics):

  1. LV comparison: DeviceLocation LV also has PARTIAL/UNKNOWN (and match rate). When you say NAA adds “partial” with no LV equivalent, what exactly is the normative gap: k-of-n at group level, “atomic” timing, the attestation token, or something else?
  2. Zone input: the deck says “no coordinates / no zone exposed”, but you also say callers specify the zone in POST /verify (no operator zoneId). Should we assume the caller provides a Commonalities Area object (which implies area coordinates), or some other zone reference? If it’s Area, are you ok to reword the deck to avoid confusion?
  3. “Atomic” verdict: what is the intended normative definition (single evaluation timestamp vs time window with maxAge/timeTolerance per device)? How should unreachable/transient failures be treated (absent vs unknown) and reported?
  4. Multi-user consent: how do you plan to collect/verify consent for multiple device owners in an interoperable way (given typical OAuth tokens are single-subject)? Would you consider scoping v0.x to enterprise fleet first and add multi-user later?
  5. Subscriptions: do you expect the subscription resource (endpoints, lifecycle, payload, retries, expiry/cancel, errors) to follow the CAMARA event subscription template one-to-one, or any deviations?
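On question 5, a minimal sketch of what a notification delivered by such a subscription might look like. The envelope attributes (`specversion`, `id`, `source`, `type`, `time`) come from the CloudEvents 1.0 specification; the event type and the `data` fields are placeholders I made up for discussion:

```python
# Hypothetical notification payload for a group-verdict subscription.
# Envelope attributes follow CloudEvents 1.0; the event type name and
# data shape are assumptions, not part of the proposal.
import json

event = {
    "specversion": "1.0",
    "id": "d8a1c2e0-0001",
    "source": "https://api.example.com/naa/v0/subscriptions/sub-123",
    "type": "org.camaraproject.naa.v0.group-verdict-changed",
    "time": "2026-04-08T10:15:00Z",
    "data": {
        "clusterId": "cluster-42",
        "verdict": "partial",
        "presentCount": 3,
        "requiredCount": 3,
        "totalCount": 4,
    },
}
print(json.dumps(event, indent=2))
```

Pinning down the payload to this level of detail is what "aligned with CloudEvents" would need to mean in practice.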

Happy to review any updated text/deck changes or an early draft OAS snippet if you have one.

@abd-abhisek
Contributor Author

abd-abhisek commented Apr 8, 2026 via email

@albertoramosmonagas
Contributor

Hi @abd-abhisek,

I agree “partial” in NAA is a group completeness concept rather than LV’s single-device confidence. One quick clarification to avoid semantic ambiguity: in your example (N=4, k=3, m=3 present) you mention NAA would return “partial” even though the threshold is met. Does that mean:

  • co_present = all N present, and partial = threshold met but not all present?
    or
  • co_present = threshold met (m ≥ k), and partial = threshold not met but some present (0 < m < k)?

Could you please confirm the intended truth table (verdict vs m/N/k) and update the deck/PR text accordingly? Also, will the response include at least requiredCount/totalCount/presentCount so relying parties can interpret “partial” consistently across operators?
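To make the two candidate readings unambiguous, here they are side by side as a sketch (m = devices present, n = cluster size, k = threshold; both functions are illustrative, and neither is the proposal's definition):

```python
# The two possible interpretations of "partial" from the thread.
# Neither is normative; the PR needs to pick one.

def verdict_strict(m: int, n: int, k: int) -> str:
    """Reading 1: co_present only when ALL devices are present."""
    if m == n:
        return "co_present"
    return "partial" if m >= k else "not_co_present"

def verdict_threshold(m: int, n: int, k: int) -> str:
    """Reading 2: co_present as soon as the threshold is met."""
    if m >= k:
        return "co_present"
    return "partial" if m > 0 else "not_co_present"

# The example from the thread: N=4, k=3, m=3
print(verdict_strict(3, 4, 3))     # partial
print(verdict_threshold(3, 4, 3))  # co_present
```

The same inputs give different verdicts under the two readings, which is why the truth table (and `requiredCount`/`totalCount`/`presentCount` in the response) matters for interoperability.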

@abd-abhisek
Contributor Author

abd-abhisek commented Apr 8, 2026 via email
