Np1sec/incremental consistency

Revision as of 18:06, 5 September 2014

Define:

A message m is "fully-acked" (from the POV of a given member u) iff, for all recipients r, u has accepted a message by r whose sender-parent is >= m's seqnum.

  • "recipients" possibly includes u, but certainly excludes m's sender
  • "accepted" means delivered locally, i.e. received, then decrypted-verified including parent hash checks; all previous messages must already be accepted
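
The "fully-acked" predicate above can be sketched as follows, assuming (hypothetically) that each member tracks, per peer r, the highest sender-parent seqnum seen in any message accepted from r; all names here are illustrative, not from any spec:

```python
def is_fully_acked(m_seqnum, m_sender, members, highest_sender_parent):
    """True iff, for every recipient r (all members except m's sender),
    we have accepted some message from r whose sender-parent seqnum
    is >= m's seqnum. highest_sender_parent maps member -> highest
    sender-parent seqnum accepted from that member so far."""
    recipients = [r for r in members if r != m_sender]
    return all(highest_sender_parent.get(r, -1) >= m_seqnum
               for r in recipients)
```

Note that the POV member u appears among the recipients (u must have acked m too), while m's own sender is excluded, matching the definition above.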

Consistency checks are triggered as follows:

  • when a member u wants to part, they send a "farewell" message m:
    • everyone should explicit-ack this message ASAP
      • this message should contain the next-sender-key, to be used after u leaves, encrypted to everyone except u. (Hopefully this addresses the concern Joe brought up.)
      • could probably do something similar for group keys
    • when this message is fully-acked, u gains consistency for all previous messages up to m, and may leave
    • other messages should probably be discarded, since u won't have a chance to verify their consistency.
    • TBD: need to think about simultaneous parts
  • when a member u accepts a non-explicit-ack message m at time t
    • if u did not send m, and they have not acked m by t+MAX_GRACE, they should send an explicit-ack
    • if m is not fully-acked (from their POV) by t+(2*MAX_RTT)+(k*MAX_GRACE) (k slightly > 1) then issue a local UI warning. Cancel the warning if/when full-ack is reached later.
  • MAX_RTT should be based on the transport
  • MAX_GRACE should be based on expected user communication rate

This guarantees that a warning shows up if we don't reach consistency within the timeout defined above, ensuring timeliness.
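
As a purely illustrative sketch, the two per-message timers described above might look like this; the numeric values of MAX_RTT, MAX_GRACE, and K are placeholders, not numbers from any spec:

```python
MAX_RTT = 2.0      # bound on transport round-trip time (seconds); assumed value
MAX_GRACE = 30.0   # based on expected user communication rate; assumed value
K = 1.1            # "k slightly > 1" from the rule above

def ack_deadline(t_accept):
    # If we accepted m at t_accept, did not send it ourselves, and have
    # not acked it by this time, send an explicit-ack.
    return t_accept + MAX_GRACE

def warn_deadline(t_accept):
    # If m is not fully-acked (from our POV) by this time, raise a local
    # UI warning; cancel it if full-ack is reached later.
    return t_accept + 2 * MAX_RTT + K * MAX_GRACE
```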

In terms of overhead, a user will effectively send at least one message per MAX_GRACE period while other people in the session are talking. When there is a lull in the conversation, no further messages are needed.
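
The resulting worst-case load is easy to bound with a back-of-the-envelope calculation: while the conversation is active, each of n members sends at most one extra explicit-ack per MAX_GRACE window, so the extra traffic the server sees is at most n / MAX_GRACE messages per second. A trivial illustration:

```python
def max_extra_msgs_per_sec(n_members, max_grace_secs):
    # Worst case: every member sends one explicit-ack per MAX_GRACE
    # window while others are talking; zero during a lull.
    return n_members / max_grace_secs
```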

I'm confident we can tweak the parameters so servers don't see too much extra load, but have not tried to model this precisely.

We are still vulnerable to a "drop everything" attack, but that can't be helped unless we have unconditionally-periodic heartbeats. Not sure if we want to put these in the upcoming spec.

(The above is basically my msg-notes stuff, except assuming reliable transport, without recovery or flow control, without heartbeats, and adapted to a server-dictated ordering.)

Timeliness

Trevor was also worried about messages appearing in wildly-different positions in a linear order, due to malicious server delaying messages. I didn't like the MAX_RTD idea for various reasons; here is an alternative:

A message m is *rejected* if its sender-parent's seqnum is < (m's seqnum - MAX_UNSYNC_COUNT)

  • MAX_UNSYNC_COUNT may either be constant, or a linear function of the number of members, TBD.

Seqnums are re-defined to ignore *rejected* messages.
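
A sketch of this rejection rule and the re-defined seqnums, assuming a server-dictated arrival order; MAX_UNSYNC_COUNT's value and all names here are illustrative:

```python
MAX_UNSYNC_COUNT = 32  # placeholder; constant or a function of member count, TBD

def is_rejected(m_seqnum, sender_parent_seqnum):
    # Reject m if its sender-parent lags too far behind m's own position.
    return sender_parent_seqnum < m_seqnum - MAX_UNSYNC_COUNT

def assign_seqnums(messages):
    """Assign seqnums in server arrival order, skipping rejected messages.

    Each message is a 1-tuple (sender_parent_seqnum,). It is evaluated
    against the seqnum it *would* receive; rejected messages get None
    and do not consume a seqnum, matching the re-definition above.
    """
    seqnums = []
    next_seq = 0
    for (parent,) in messages:
        if is_rejected(next_seq, parent):
            seqnums.append(None)   # rejected: invisible to later seqnums
        else:
            seqnums.append(next_seq)
            next_seq += 1
    return seqnums
```

Because every member evaluates the same deterministic condition on the same transcript, they should all reject the same messages (or transcript consistency has already broken).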

This definition should be globally consistent (if it is not, transcript consistency has already broken), which makes it easier to reason about; moreover, the sender of a rejected message can detect the rejection reliably and offer the user the option to re-send.

It is harder to be sure that time-based rejection conditions would be globally consistent. Transcript consistency still breaks in that case too, but it becomes *more likely* to break, and I think reducing this chance would be better.