Overview
whatsapp-rust implements the Signal Protocol for end-to-end encryption of both one-on-one and group messages. The implementation is based on Signal’s libsignal library, adapted for WhatsApp’s specific protocol requirements.
Architecture
The Signal Protocol implementation is split across two main locations:
- wacore/libsignal/ — platform-agnostic Signal Protocol core (Rust port of libsignal)
- src/store/signal*.rs — WhatsApp-specific storage integration with Diesel/SQLite
Key Components
Double Ratchet Protocol
The Double Ratchet algorithm provides forward secrecy and post-compromise security for 1:1 messages.
Session Initialization
Two participants initialize a session using Diffie-Hellman key exchange:
- Compute shared secrets from ephemeral key exchanges
- Derive root key and chain key using HKDF-SHA256:
- Initialize sender and receiver chains
wacore/libsignal/src/protocol/ratchet.rs:41-172
Message Encryption
Each message advances the sender chain and derives ephemeral message keys:
- Load current session state
- Get sender chain key and derive message keys:
- Encrypt plaintext with AES-256-CBC:
- Create SignalMessage with MAC for authentication
- Advance chain key and save session state
Two message types are produced:
- SignalMessage: Standard encrypted message
- PreKeySignalMessage: Includes prekey bundle for session establishment
Message Decryption
Decryption handles out-of-order delivery and tries multiple session states:
- Try current session state first
- If MAC verification fails, try previous (archived) sessions
- Derive/retrieve message keys for the counter
- Verify MAC:
- Decrypt with AES-256-CBC
- Promote successful session to current if needed
The implementation optimizes memory by using take/restore patterns to avoid cloning session states during decryption attempts (see session_cipher.rs:495-619).
Chain key ratcheting
Message keys are derived from chain keys, which advance with each message:
wacore/libsignal/src/protocol/ratchet/keys.rs
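The advancing chain can be sketched as below. Signal-style chains derive both outputs from the current chain key with HMAC-SHA256 (the seed bytes shown are the conventional Signal values, stated here as an assumption about this port); `kdf` is a labeled stand-in so the sketch stays dependency-free:

```rust
// Sketch of chain-key ratcheting, not the library's code. The real
// implementation uses HMAC-SHA256; `kdf` below is a stand-in.
const MESSAGE_KEY_SEED: u8 = 0x01; // HMAC input byte for message keys
const CHAIN_KEY_SEED: u8 = 0x02; // HMAC input byte for the next chain key

#[derive(Clone)]
struct ChainKey {
    index: u32,
    key: [u8; 32],
}

// Placeholder for HMAC-SHA256(key, [seed]).
fn kdf(key: &[u8; 32], seed: u8) -> [u8; 32] {
    let mut out = *key;
    for b in out.iter_mut() {
        *b = b.wrapping_mul(31).wrapping_add(seed);
    }
    out
}

impl ChainKey {
    /// Derive the message key for the current index; the chain key itself
    /// is left unchanged.
    fn message_key(&self) -> [u8; 32] {
        kdf(&self.key, MESSAGE_KEY_SEED)
    }

    /// Advance to the next chain key. The old key cannot be recovered from
    /// the new one, which is what provides forward secrecy.
    fn next(&self) -> ChainKey {
        ChainKey {
            index: self.index + 1,
            key: kdf(&self.key, CHAIN_KEY_SEED),
        }
    }
}
```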
Chain key overflow protection
The chain key index is a u32 that increments with each message. Without overflow protection, the index could silently wrap past u32::MAX (4,294,967,295) back to 0, creating a counter-reuse vulnerability that breaks cryptographic guarantees (nonce reuse in message key derivation).
Both 1:1 and group chain keys use checked_add() to return a typed error instead of wrapping:
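A minimal sketch of the guard (the error type here is illustrative, not the library’s actual error enum):

```rust
// Sketch of the checked_add overflow guard on the chain key index.
#[derive(Debug, PartialEq)]
enum ChainError {
    InvalidState(&'static str),
}

struct ChainKeyIndex(u32);

impl ChainKeyIndex {
    /// Advance the counter, refusing to wrap past u32::MAX.
    fn advance(&self) -> Result<ChainKeyIndex, ChainError> {
        self.0
            .checked_add(1)
            .map(ChainKeyIndex)
            .ok_or(ChainError::InvalidState("chain key index overflow"))
    }
}
```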
wacore/libsignal/src/protocol/ratchet/keys.rs, wacore/libsignal/src/protocol/sender_keys.rs
Forward Jumps
The protocol tolerates out-of-order messages up to a limit:
wacore/libsignal/src/protocol/session_cipher.rs:832-847
DM device fanout
When sending a direct message, the library resolves all known devices for both the recipient and your own account, then encrypts two different plaintexts for two categories of devices:
- Recipient devices receive the actual message content
- Own other devices (your other linked devices) receive a DeviceSentMessage wrapper containing the message plus the destination JID, so your other devices can display the sent message in the correct chat
Device resolution
The DM send path builds the full device list in a WA Web-compliant manner (matching WAWebSendUserMsgJob and WAWebDBDeviceListFanout):
- Local registry first — the client checks the local device registry via get_devices_from_registry() for both the recipient and own account. A network fetch (get_user_devices) is only triggered on a cache miss, avoiding unnecessary LID-migration side effects.
- Hosted device filtering — devices flagged as hosted (via is_hosted()) are filtered out, matching WA Web’s DBDeviceListFanout exclusion.
- Sender device exclusion — the exact sender device is removed from the list so ensure_e2e_sessions never creates a self-session. This matches WA Web’s isMeDevice check in getFanOutList.
- Self-DM deduplication — when sending to your own account, the recipient and own device lists overlap. A HashSet-based dedup pass (matching WA Web’s Map keyed by toString) removes duplicates.
Device partitioning
The partition_dm_devices function splits all resolved devices into recipient and own groups, and excludes the exact sender device (the current device) entirely:
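A simplified sketch of the partitioning with a stripped-down Jid type; the names mirror the document (partition_dm_devices, matches_user_or_lid), but all signatures and fields are assumptions:

```rust
// Minimal sketch of DM device partitioning (not the library's code).
#[derive(Clone, Debug, PartialEq)]
struct Jid {
    user: String,
    device: u16,
    server: String, // "s.whatsapp.net" (PN) or "lid" (LID)
}

// A device belongs to "us" if its user matches either addressing scheme.
fn matches_user_or_lid(jid: &Jid, own_pn: &Jid, own_lid: &Jid) -> bool {
    (jid.server == own_pn.server && jid.user == own_pn.user)
        || (jid.server == own_lid.server && jid.user == own_lid.user)
}

fn partition_dm_devices(
    devices: Vec<Jid>,
    own_pn: &Jid,  // own PN JID; its device id is this device's id
    own_lid: &Jid, // own LID JID, same device id
) -> (Vec<Jid>, Vec<Jid>) {
    let mut recipient = Vec::new();
    let mut own = Vec::new();
    for d in devices {
        // Exclude the exact sender device: never create a self-session.
        let is_sender = matches_user_or_lid(&d, own_pn, own_lid)
            && d.device == own_pn.device;
        if is_sender {
            continue;
        }
        if matches_user_or_lid(&d, own_pn, own_lid) {
            own.push(d); // gets the DeviceSentMessage plaintext
        } else {
            recipient.push(d); // gets the actual message content
        }
    }
    (recipient, own)
}
```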
Sender device exclusion
The exact sender device is identified by matching both the user and device ID against your phone number JID (PN) or your Linked Identity JID (LID).
Own device recognition
After excluding the sender device, the remaining devices are classified using matches_user_or_lid, which checks whether a device JID belongs to the same user as either your PN or LID. Devices that match are own devices and must receive the DeviceSentMessage plaintext, not the recipient plaintext. Without LID matching, your own LID-based devices would be misclassified as recipient devices, causing them to receive the wrong message format.
Both PN-based and LID-based devices must be checked because WhatsApp’s multi-device architecture uses both addressing schemes. A user’s devices may appear under either their phone number JID (@s.whatsapp.net) or their Linked Identity JID (@lid), depending on the device type and registration path.
PreparedDmStanza
prepare_dm_stanza returns a PreparedDmStanza struct containing the stanza node and the locally computed phash for server ACK validation. The phash is computed via MessageUtils::participant_list_hash(). Unlike group messages, the DM phash is not sent on the wire — WA Web only includes phash in the DeviceSentMessage for groups. The DM phash is used purely for local validation against the server’s ACK to detect device-list drift.
The DeviceSentMessage.phash field is set to None for DMs, matching WA Web’s behavior where only group DeviceSentMessage wrappers include a phash. The DM phash is computed and tracked separately by the caller.
wacore/src/send.rs:675-820, src/send.rs
PN→LID session migration
WhatsApp’s multi-device architecture uses two addressing schemes: phone number JIDs (PN, @s.whatsapp.net) and Linked Identity JIDs (LID, @lid). WhatsApp Web always resolves PN→LID before any session operation via createSignalAddress(). whatsapp-rust mirrors this behavior — when a LID mapping is discovered for a phone number, any Signal sessions stored under the PN address are automatically migrated to the corresponding LID address.
Why migration is needed
After pairing, the primary phone may initially establish sessions under a PN address. Once the LID mapping becomes known (from usync, incoming messages, or device notifications), the phone begins sending from the LID address. Without migration, the client holds a session under the PN address but receives messages addressed to the LID — causing SessionNotFound decryption failures.
Proactive migration at LID discovery
When a new LID-PN mapping is learned (via add_lid_pn_mapping), the client scans devices 0–99 for PN-keyed sessions and migrates them. All reads and writes go through the SignalStoreCache rather than the backend directly — this prevents reading stale data when the cache has unflushed mutations (e.g., after SKDM encryption ratcheted the session). The migrated state is flushed to the backend at the end so it survives restarts.
| PN session | LID session | Action |
|---|---|---|
| Exists | Does not exist | Move session and identity from PN→LID address |
| Exists | Exists | Delete stale PN session (LID takes precedence) |
| Does not exist | Any | No action |
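The table translates directly into a two-flag decision, sketched here (the enum and function are illustrative; the real code moves actual session records through the SignalStoreCache):

```rust
// Migration decision table as a match (sketch, not the library's code).
#[derive(Debug, PartialEq)]
enum MigrationAction {
    MoveToLid,     // move session + identity from the PN to the LID address
    DeleteStalePn, // LID session takes precedence; drop the PN copy
    Nothing,
}

fn migration_action(pn_exists: bool, lid_exists: bool) -> MigrationAction {
    match (pn_exists, lid_exists) {
        (true, false) => MigrationAction::MoveToLid,
        (true, true) => MigrationAction::DeleteStalePn,
        (false, _) => MigrationAction::Nothing,
    }
}
```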
On-the-fly migration during decryption
If a message arrives from a LID address and decryption fails with SessionNotFound or InvalidPreKeyId, the client attempts PN→LID migration as a fallback before requesting a retry:
- Look up the PN for the sender’s LID
- Attempt to migrate PN sessions to LID via the signal cache (same cache-first logic as proactive migration)
- Retry decryption with the migrated session (already in the cache — no reload needed)
- If DuplicatedMessage occurs during post-migration retry, it is silently ignored
- Fall back to retry receipt only if migration does not resolve the issue
The InvalidPreKeyId case occurs when a PreKeyMessage references a consumed one-time prekey, but the session actually exists under a PN address (legacy migration). Migrating the session lets Signal use the existing ratchet state instead of looking up the consumed prekey. This migration is attempted in both the identity-change retry path and the initial decryption path.
This ensures existing databases are fixed without requiring re-pairing.
Login-time session check
At login, the client checks the session state of own device 0 (primary phone):
- LID session exists — no action needed
- PN session only — logged; migration deferred to first message via on-the-fly path
- No session — will be established on first message exchange
Both migration paths route through the SignalStoreCache, ensuring they see the latest in-memory state. The proactive migration runs when a LID mapping is first discovered and flushes to the backend afterward. The on-the-fly migration handles the case where the database already contains stale PN sessions from before the mapping was known.
src/client/lid_pn.rs, src/client/sessions.rs, src/message.rs
Sender Keys (Group Encryption)
Groups use the Sender Key protocol for efficient multi-recipient encryption.
Sender key address normalization
Sender key records are keyed by a composite SenderKeyName containing the group JID and a sender protocol address string. WhatsApp delivers group stanzas with inconsistent sender addressing — the pkmsg (which carries the SKDM) arrives with a device-qualified participant JID (e.g., 100000000000001.1:75@lid), while the skmsg (the actual encrypted group message) arrives with a bare participant JID (e.g., 100000000000001.1@lid).
Without normalization, the sender key would be stored under the device-qualified address during SKDM processing but looked up under the bare address during skmsg decryption, causing NoSenderKeyState failures.
The client normalizes the sender JID to its bare form using to_non_ad() (which strips the device component, setting device = 0, agent = 0) at every point where a SenderKeyName is constructed. The SenderKeyName::from_jid() convenience method handles the to_string() conversion automatically:
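The normalization effect can be sketched with a simplified Jid: device-qualified and bare forms collapse to the same cache key. The struct fields and the example group JID below are illustrative assumptions; only the key shape "{group}:{bare_user}@{server}.0" comes from the document:

```rust
// Sketch of sender key address normalization (simplified Jid type).
#[derive(Clone)]
struct Jid {
    user: String,
    server: String,
    device: u16,
    agent: u8,
}

impl Jid {
    /// Bare ("non-AD") form: device = 0, agent = 0.
    fn to_non_ad(&self) -> Jid {
        Jid { device: 0, agent: 0, ..self.clone() }
    }

    fn to_protocol_string(&self) -> String {
        // Device-qualified users render as "user:device"; bare as "user".
        if self.device > 0 {
            format!("{}:{}@{}.0", self.user, self.device, self.server)
        } else {
            format!("{}@{}.0", self.user, self.server)
        }
    }
}

/// Normalize first, then build "{group}:{bare_user}@{server}.0".
fn sender_key_cache_key(group: &str, sender: &Jid) -> String {
    let bare = sender.to_non_ad();
    format!("{}:{}", group, bare.to_protocol_string())
}
```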
SenderKeyName::from_jid() is equivalent to SenderKeyName::new(group_jid.to_string(), sender_address.to_string()) but avoids the manual to_string() calls and is the preferred constructor.
This ensures the cache key is always in the form "{group}:{bare_user}@{server}.0", regardless of whether the original stanza used a device-qualified or bare JID.
Location: src/message.rs, wacore/libsignal/src/store/sender_key_name.rs, wacore/binary/src/jid.rs (to_non_ad())
Sender Key Distribution
Each participant generates and distributes a sender key:
- Chain ID: Random 31-bit identifier for this sender key session
- Iteration: Message counter (starts at 0)
- Chain Key: 32-byte seed for deriving message keys
- Signing Key: Ed25519 public key for message authentication
Group Encryption
Messages are encrypted with the sender’s current chain key:
- Load sender key state for the group
- Derive message keys from current chain key
- Encrypt with AES-256-CBC
- Sign message with Ed25519 private key
- Advance chain key
Group Decryption
Recipients decrypt using the sender’s distributed key:
- Parse SenderKeyMessage
- Look up sender key state by chain ID
- Verify Ed25519 signature
- Derive message keys for iteration (handling out-of-order)
- Decrypt with AES-256-CBC
Unknown device detection
During group message decryption, the client checks whether the sender’s device is present in the local device registry via is_from_known_device(). This detection triggers in two places within the group message processing path:
- After successful skmsg decrypt — if the sender device is not in the registry, the decrypted message is still processed and delivered normally. Signal decryption success already proves the sender holds a valid session key, so discarding the message would only add latency via an unnecessary retry round-trip. A background device sync is triggered to update the local device registry.
- After a NoSenderKeyState error — if the sender device is unknown, the retry reason is upgraded from NoSession to UnknownCompanionNoPrekey
- Online: the client immediately invalidates the cached device registry for the user and fires a background usync request to refresh the device list
- Offline (during offline sync): the unknown device’s user JID is batched into a PendingDeviceSync set, which is flushed after offline sync completes (see Deferred device sync)
src/message.rs, src/client/device_registry.rs, src/pending_device_sync.rs
Immutable sender key loading
The SenderKeyStore trait’s load_sender_key method takes &self (not &mut self), allowing sender key lookups to proceed under a read lock. This is safe because loading a sender key is a pure read operation — no state is mutated. The store_sender_key method still requires &mut self since it modifies state.
This means concurrent group decryptions for different senders can load sender keys in parallel without contention, while writes (SKDM processing) still serialize correctly.
If you implement SenderKeyStore for a custom backend, load_sender_key must use &self (immutable reference). Implementations that previously required &mut self for internal caching should use interior mutability (e.g., Mutex or RwLock) instead.
Sender key existence check
Before distributing sender keys, the group message path checks whether the local sender key already exists. This check uses the SignalStoreCache with a read lock (get_sender_key()), matching the status broadcast path. This avoids acquiring a write lock and prevents unnecessary SKDM re-distribution on every group send.
Per-device sender key tracking
To avoid resending Sender Key Distribution Messages on every group message, the client tracks sender key distribution status per device for each group. This uses a unified sender_key_devices table (see Storage - ProtocolStore) that matches WhatsApp Web’s participant.senderKey Map<deviceJid, boolean> model — a single boolean per device per group indicating whether that device has a valid sender key (true) or needs fresh SKDM distribution (false).
The tracking update is deferred until after the server acknowledges the message stanza. This matches WhatsApp Web’s behavior where markHasSenderKey() is only called after the server confirms receipt.
Why deferred? If the tracking were updated immediately after building the stanza (but before sending), a network failure between stanza build and send would leave stale entries — devices would be marked as having the sender key when they never actually received it. Subsequent messages would skip SKDM for those devices, causing decryption failures.
PreparedGroupStanza return value:
prepare_group_stanza returns a PreparedGroupStanza struct containing the stanza node and a skdm_devices: Vec<Jid> field listing exactly which devices received SKDM in this stanza. This eliminates the need for callers to re-resolve devices after sending, closing a race window where the device list could change between stanza preparation and post-ACK tracking update.
- Group path: After send_node() succeeds, the caller uses the skdm_devices list from PreparedGroupStanza to call set_sender_key_status(group, devices, true). No re-resolution needed.
- Status path: A late-init boolean tracks whether full distribution occurred. The sender key tracking is only updated after the status stanza is successfully sent.
- Error recovery: If prepare_group_stanza fails with NoSenderKeyState, all sender key device tracking for that group is cleared and the send is retried with full distribution.
- Sender key rotation: On rotateKey, the Signal sender key is also deleted for forward secrecy (matching WhatsApp Web’s deleteGroupSenderKeyInfo), and all device tracking is cleared via clear_sender_key_devices.
Before each group send, the client:
- Loads the per-device sender key map — first checking the in-memory cache, falling back to the database via get_sender_key_devices
- Resolves all current group participant devices
- Computes the diff — only devices with has_key=false or not yet tracked receive the SKDM
- Passes the targeted device list to prepare_group_stanza via the skdm_target_devices parameter
src/send.rs, src/client/sender_keys.rs, wacore/src/send.rs
In-memory sender key device cache
The SenderKeyDeviceCache provides an in-memory caching layer over the per-device sender key tracking data stored in the database. Without this cache, every group send would require a database round-trip to load the sender key device map — the cache eliminates that overhead after the first load for each group.
- Time-to-idle eviction: The cache uses TTI semantics (default: 1 hour, 500 entries), so entries for inactive groups are automatically evicted while frequently-used groups stay cached
- Pre-parsed, pre-indexed maps: Database rows are parsed into a SenderKeyDeviceMap struct that provides O(1) lookups by user and device ID, avoiding per-query string parsing
- Single-flight initialization: The get_or_init method uses moka’s built-in coalescing — if multiple concurrent group sends for the same group trigger a cache miss simultaneously, only one database read executes and all callers share the result
- Explicit invalidation: The cache is invalidated when sender key state changes (rotation, error recovery, retry failures) so stale data is never served
SenderKeyDeviceMap structure:
The SenderKeyDeviceMap pre-parses JID strings from the database into a user-to-devices HashMap for efficient lookup:
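The pre-indexed shape might look like the following sketch; the field and method names here are illustrative, not the library’s actual definitions:

```rust
use std::collections::HashMap;

// Illustrative shape of the pre-parsed map: user -> (device id -> has_key).
#[derive(Default)]
struct SenderKeyDeviceMap {
    by_user: HashMap<String, HashMap<u16, bool>>,
}

impl SenderKeyDeviceMap {
    /// Parse once at load time; later lookups are O(1) with no string
    /// parsing per query.
    fn insert(&mut self, user: &str, device: u16, has_key: bool) {
        self.by_user
            .entry(user.to_string())
            .or_default()
            .insert(device, has_key);
    }

    /// None = never tracked; Some(false) = tracked but needs fresh SKDM.
    fn has_key(&self, user: &str, device: u16) -> Option<bool> {
        self.by_user.get(user).and_then(|m| m.get(&device)).copied()
    }

    /// Devices with has_key=false or not yet tracked receive the SKDM.
    fn needs_skdm(&self, user: &str, device: u16) -> bool {
        !self.has_key(user, device).unwrap_or(false)
    }
}
```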
| Event | Action |
|---|---|
| Sender key rotation (rotateKey) | Invalidate group entry |
| NoSenderKeyState error during send | Invalidate group entry |
| Retry failure for a group message | Invalidate group entry |
| Server rejects group stanza | Invalidate group entry |
| New device added (patch_device_add) | Invalidate all entries |
| Device removed (patch_device_remove) | Invalidate all entries |
| Identity change (clear_device_record) | Invalidate all entries + delete status@broadcast sender key |
Capacity and TTI are configurable via the sender_key_devices_cache field in CacheConfig.
Location: src/sender_key_device_cache.rs, src/send.rs
Phash validation for stale device list detection
When sending group, status, or DM messages, the library validates the participant hash (phash) returned in the server’s acknowledgment against the locally computed phash. A mismatch indicates that the server’s view of participant devices differs from the client’s — meaning the local device list is stale.
How it works:
- Before sending, the client obtains the locally computed phash — from the stanza phash attribute for group/status messages, or from PreparedDmStanza.phash for DMs
- A oneshot ack waiter is registered for the message ID via register_ack_waiter
- The message stanza is sent to the server
- A background task (spawn_phash_validation) awaits the server’s ack (with a 10-second timeout)
- The server’s ack includes its own phash — if it differs from the local value, the client invalidates caches
| Send path | Sender key device cache | Group info cache | Device registry |
|---|---|---|---|
| Group messages | Invalidated | Invalidated | — |
| Status messages | Invalidated | Not invalidated | — |
| DM messages | — | — | Recipient + own PN devices invalidated |
The DM path matches WA Web’s behavior (syncDeviceListJob([recipient, me])). On mismatch, the client invalidates the device registry cache for both the recipient’s user JID and your own phone number (PN) JID, ensuring the next send re-fetches the current device list for both parties.
The phash validation runs asynchronously in the background and does not block the send path. If the server ack times out (after 10 seconds) or the oneshot channel is dropped, the validation is silently skipped. This matches WhatsApp Web’s approach of using phash as a best-effort staleness detector rather than a hard requirement.
src/send.rs, src/client.rs
Cryptographic Primitives
AES-256-CBC (Message Content)
Used for encrypting message bodies in both 1:1 and group messages:
wacore/libsignal/src/crypto/aes_cbc.rs
Thread-Local Buffers
The implementation uses thread-local buffers to reduce allocations:
wacore/libsignal/src/protocol/session_cipher.rs:14-54
HKDF-SHA256
Used for key derivation in session initialization:
wacore/libsignal/src/protocol/ratchet.rs:18-39
PreKey Management
Pre-keys enable asynchronous session establishment in the Signal Protocol. whatsapp-rust manages pre-key generation and upload to match WhatsApp Web’s behavior.
Configuration
- WANTED_PRE_KEY_COUNT (812): Number of pre-keys uploaded in each batch, matching WhatsApp Web
- MIN_PRE_KEY_COUNT (5): Minimum server-side pre-key count before triggering upload
Pre-key ID counter and wrap-around
Pre-key IDs use a persistent monotonic counter (Device::next_pre_key_id) that only increases, matching WhatsApp Web’s NEXT_PK_ID pattern:
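A sketch of the counter under the 24-bit wire limit. Wrapping back to 1 past MAX_PREKEY_ID is an assumption about the exact wrap target; the document only guarantees a monotonic counter and a wrap somewhere past the 24-bit range:

```rust
// Sketch of monotonic pre-key ID allocation with 24-bit wrap-around.
const MAX_PREKEY_ID: u32 = (1 << 24) - 1; // 16_777_215, 24-bit wire format

/// Allocate `count` IDs starting at `next_id`; returns the IDs and the
/// new counter value to persist (Device::next_pre_key_id in the document).
fn next_pre_key_ids(next_id: u32, count: u32) -> (Vec<u32>, u32) {
    let mut ids = Vec::with_capacity(count as usize);
    let mut id = next_id;
    for _ in 0..count {
        ids.push(id);
        // Wrap target of 1 is an assumption for this sketch.
        id = if id >= MAX_PREKEY_ID { 1 } else { id + 1 };
    }
    (ids, id)
}
```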
If the counter wraps while unconsumed high-ID pre-keys still exist in the store, the database upsert (ON CONFLICT DO UPDATE) silently overwrites them. This is an accepted trade-off because the server consumes keys well before a full 16M cycle completes.
src/prekeys.rs
Force-refreshing pre-keys for device migration
When migrating a device from an external source (e.g., a Baileys session into an InMemoryBackend), the server may still hold pre-key IDs whose private key material you cannot reconstruct. Any pkmsg referencing those IDs will fail permanently with InvalidPreKeyId.
The public refresh_pre_keys() method force-uploads a fresh batch of 812 pre-keys, giving the server new IDs whose private key material the client holds locally. Old unmatched IDs drain naturally as peers consume them.
The refresh path acquires the prekey_upload_lock to prevent races with the count-based and digest-repair upload paths, then calls upload_pre_keys_with_retry(force: true), which uses Fibonacci backoff (1s, 2s, 3s, 5s, 8s, … capped at 610s).
Location: src/prekeys.rs:263-266
Digest key validation
After connection, the client validates that the server’s copy of the key bundle matches local keys. This matches WhatsApp Web’s WAWebDigestKeyJob.digestKey() flow.
Wire format:
- Query the server for the key bundle digest via DigestKeyBundleSpec
- If the server returns 404 (no record), trigger a full pre-key re-upload
- If the server returns 406/503 or other errors, log and skip
- On success, compare registration IDs
- Load each pre-key referenced by the server and extract its public key
- Compute a local SHA-1 digest over: identity public key + signed pre-key public + signed pre-key signature + all pre-key public keys
- Compare the local hash against the server-provided hash
The <list> node contains <id> children (not <key> children). The parser iterates all children of <list> without tag filtering, matching WhatsApp Web’s mapChildren behavior, which does not filter by tag name.
src/prekeys.rs:218-344, wacore/src/iq/prekeys.rs:170-302
Storage Integration
whatsapp-rust integrates Signal Protocol storage through a layered architecture: the Device struct implements the libsignal SessionStore, IdentityKeyStore, and other traits. These are wrapped by SignalProtocolStoreAdapter, which adds the SignalStoreCache layer — sessions are cached as SessionRecord objects (not bytes), with serialization deferred to flush().
Each store (sessions, identities, sender keys) is flushed independently under its own lock. Only one store is locked during its I/O — the other two remain free for concurrent encrypt/decrypt operations. The lock is held from snapshot through write through clear, so mutations to the same store are blocked until flush completes, preventing dirty-set races:
Security Considerations
Identity Key Trust
The implementation verifies identity keys before encryption/decryption:
wacore/libsignal/src/protocol/session_cipher.rs:160-172
Duplicate Message Detection
The protocol detects and rejects duplicate messages:
wacore/libsignal/src/protocol/session_cipher.rs:822-827
Log level discipline
The protocol layer follows strict rules about what cryptographic material appears in logs and at which level:
- No private keys or secrets are ever logged — ChainKey, MessageKeys, and RootKey types do not expose their key bytes through logging
- Public keys appear only at warn/error levels — and only when something has gone wrong (untrusted identity, MAC failure)
- MAC key fingerprints are truncated — only the first 4 bytes (8 hex chars) are logged during MAC verification failures, not the full key
- Ratchet keys in debug logs — successful decryptions log the sender ratchet public key (never private) at debug level for diagnostics
- Pre-key operations use debug for routine operations and warn/info for exceptional conditions
The Signal protocol layer (wacore/libsignal/src/protocol/) uses no trace!-level logging. Sensitive operations stay at debug or above to avoid leaking material in verbose log configurations.
Session state corruption
Detailed logging helps diagnose crypto failures:
- All attempted session states
- Receiver chain information
- Message metadata (sender ratchet key, counter)
wacore/libsignal/src/protocol/session_cipher.rs:365-454
Protocol safety limits
The implementation enforces several hard limits to prevent resource exhaustion and cryptographic failures:
| Constant | Value | Purpose |
|---|---|---|
| MAX_PREKEY_ID | 16,777,215 (2^24 − 1) | Maximum valid pre-key ID (24-bit wire format) |
| MAX_FORWARD_JUMPS | 25,000 | Maximum message skip in a ratchet chain |
| MAX_MESSAGE_KEYS | 2,000 | Maximum cached out-of-order message keys per chain |
| MAX_RECEIVER_CHAINS | 5 | Maximum receiver chains per session |
| ARCHIVED_STATES_MAX_LENGTH | 40 | Maximum archived session states |
| MAX_SENDER_KEY_STATES | 5 | Maximum sender key states per group |
| MESSAGE_KEY_PRUNE_THRESHOLD | 50 | Amortized eviction trigger for old message keys |
| Chain key index | u32::MAX | Overflow returns InvalidState error (not silent wrap) |
wacore/libsignal/src/protocol/consts.rs
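The table’s limits might be declared roughly as follows; names come from the table, while the exact integer types are assumptions:

```rust
// Protocol safety limits (values from the table above; types assumed).
pub const MAX_PREKEY_ID: u32 = 16_777_215; // 2^24 - 1, 24-bit wire format
pub const MAX_FORWARD_JUMPS: u32 = 25_000; // max message skip per chain
pub const MAX_MESSAGE_KEYS: usize = 2_000; // cached out-of-order keys per chain
pub const MAX_RECEIVER_CHAINS: usize = 5; // receiver chains per session
pub const ARCHIVED_STATES_MAX_LENGTH: usize = 40; // archived session states
pub const MAX_SENDER_KEY_STATES: usize = 5; // sender key states per group
pub const MESSAGE_KEY_PRUNE_THRESHOLD: usize = 50; // amortized eviction trigger
```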
Performance optimizations
Session object cache
The SignalStoreCache stores sessions and sender keys as deserialized objects (SessionRecord and SenderKeyRecord) rather than serialized bytes, matching WhatsApp Web’s architecture where the JS object IS the cache. Serialization only happens during flush() to the database — not on every store_session or put_sender_key call.
The store_session method takes SessionRecord by value, enabling zero-cost moves from the protocol layer. All call sites (message_encrypt, message_decrypt_signal, message_decrypt_prekey, process_prekey_bundle) drop the record immediately after storing. Taking ownership eliminates the .clone() in the adapter, and the compiler enforces no use-after-store.
Per-message hot path impact:
| Operation | Before | After |
|---|---|---|
| store_session | clone all fields + prost encode | move (zero-cost) |
| load_session | prost decode + construct | clone current session only (previous_sessions O(1) via Arc) |
| store_sender_key | serialize to bytes + store bytes | store SenderKeyRecord object directly |
| load_sender_key (&self) | load bytes + deserialize | return cached SenderKeyRecord object (read lock only) |
| flush (batched) | write bytes to DB | serialize sessions + sender keys + write bytes to DB |
Arc previous sessions
SessionRecord.previous_sessions is wrapped in Arc<Vec<SessionStructure>>, making clone O(1) for the ~40 archived previous sessions that previously accounted for ~40% of the serialize cost. Mutating the archived list is the rare path and goes through Arc::make_mut and a deep copy.
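A sketch of the copy-on-write pattern, with SessionStructure as a stand-in for the real prost-generated type:

```rust
use std::sync::Arc;

// Archived sessions behind an Arc make SessionRecord::clone O(1);
// mutation is rare and pays for a deep copy via Arc::make_mut.
#[derive(Clone, Default)]
struct SessionStructure {
    counter: u32, // stand-in field; the real type is prost-generated
}

#[derive(Clone, Default)]
struct SessionRecord {
    current: SessionStructure,
    previous_sessions: Arc<Vec<SessionStructure>>,
}

impl SessionRecord {
    /// Move the current session to the front of the archive.
    fn archive_current(&mut self) {
        let old = std::mem::take(&mut self.current);
        // Clones the Vec only if another handle still shares it.
        Arc::make_mut(&mut self.previous_sessions).insert(0, old);
    }
}
```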
Redundant signal store write elimination
The SignalStoreCache uses targeted deduplication strategies per store type. For identities (which rarely change), put_dedup() compares incoming bytes against the cached value and skips the write if identical. Sessions and sender keys use plain put() since they change with every message — dedup would always fail and waste CPU cycles. This split avoids unnecessary database writes during flush() while not adding overhead where it provides no benefit.
Key reuse in cache
The key_for() method on SessionStoreState, SenderKeyStoreState, and ByteStoreState reuses existing Arc<str> keys from the HashMap via get_key_value(), avoiding a heap allocation on every cache operation.
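The trick can be sketched as a free function (the real key_for() is a method on the store state types):

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// If the map already holds an equal key, clone the existing Arc<str>
/// (a refcount bump, no heap allocation) instead of allocating a fresh
/// Arc from the borrowed &str.
fn key_for<V>(map: &HashMap<Arc<str>, V>, key: &str) -> Arc<str> {
    match map.get_key_value(key) {
        Some((existing, _)) => existing.clone(), // no allocation
        None => Arc::from(key),                  // first sighting: allocate once
    }
}
```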
Single-Allocation Session Lock Keys
Session lock keys use the full Signal protocol address string (e.g., 5511999887766@c.us.0). The JidExt trait provides methods for generating these strings, defined in wacore/src/types/jid.rs.
The to_protocol_address_string() method is used on hot paths (message encryption and decryption) as the key for session_locks. It pre-sizes the output buffer and builds the string in a single allocation, avoiding the two-allocation overhead of constructing a ProtocolAddress and then calling .to_string().
The write_protocol_address_to() free function provides the same formatting but writes into a caller-supplied &mut String buffer, enabling buffer reuse across multiple JIDs (used by session_mutexes_for()).
Format examples:
| JID | Signal address | Protocol address string |
|---|---|---|
| 5511999887766@s.whatsapp.net | 5511999887766@c.us | 5511999887766@c.us.0 |
| 5511999887766:33@s.whatsapp.net | 5511999887766:33@c.us | 5511999887766:33@c.us.0 |
| 123456789@lid | 123456789@lid | 123456789@lid.0 |
| 123456789:33@lid | 123456789:33@lid | 123456789:33@lid.0 |
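The single-allocation construction can be sketched as a free function (the real method hangs off JidExt; this signature is an assumption), with the server mapping and formats taken from the table:

```rust
use std::fmt::Write;

/// Build "user[:device]@server.0" in one pre-sized allocation.
/// s.whatsapp.net maps to c.us, matching WhatsApp Web's internal format.
fn to_protocol_address_string(user: &str, device: u16, server: &str) -> String {
    let server = if server == "s.whatsapp.net" { "c.us" } else { server };
    // Pre-size for user + ":65535" + "@" + server + ".0".
    let mut out = String::with_capacity(user.len() + server.len() + 10);
    out.push_str(user);
    if device > 0 {
        out.push(':');
        write!(out, "{}", device).expect("writing to a String cannot fail");
    }
    out.push('@');
    out.push_str(server);
    out.push_str(".0"); // Signal device_id, always 0 in WhatsApp's usage
    out
}
```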
The server s.whatsapp.net is mapped to c.us in address strings, matching WhatsApp Web’s internal format. The trailing .0 is the Signal device_id (always 0 in WhatsApp’s usage).
The DM fanout mirrors WA Web’s WAWebSendUserMsgJob behavior, where the local device table is read on the send path, and WAWebDBDeviceListFanout filters out hosted devices. The client checks the local device registry first (via get_devices_from_registry()); a network fetch is only triggered on a cache miss to avoid unnecessary LID-migration side effects from get_user_devices. The sender device is excluded (matching WA Web’s isMeDevice in getFanOutList), and for self-DMs, overlapping device lists are deduplicated using a HashSet (matching WA Web’s Map keyed by toString).
Own companion devices (your other linked devices) receive per-device encryption for multi-device self-sync via DeviceSentMessage.
WA Web has a bare-<enc> fast path for a single primary device (WAWebSendMsgCreateFanoutStanza). This is not implemented in whatsapp-rust because encrypt_for_devices always wraps in <to jid=...> nodes. The <participants> form is accepted by the server regardless.
The build_session_lock_keys() helper resolves encryption JIDs and sorts them for deadlock-free lock acquisition:
- Resolves the recipient to its bare encryption JID via resolve_encryption_jid().to_non_ad() (stripping the device component)
- Resolves own companion device JIDs
- Sorts by (server, user, device) using cmp_for_lock_order() and deduplicates
- Returns a sorted Vec<Jid> — no intermediate String allocations needed for sorting
The session_mutexes_for() helper then converts sorted JIDs to session mutexes, reusing a single String buffer via write_protocol_address_to() to avoid per-JID heap allocations. Lock keys use the full protocol address string (e.g., 100000012345678@lid.0), matching the decrypt path’s lock format. This ensures send and receive paths serialize on the exact same lock key.
Location: wacore/src/types/jid.rs:4-51, src/send.rs:1481-1507
Single-buffer ProtocolAddress
The ProtocolAddress struct stores the full address string "{name}.{device_id}" in a single String buffer, with a name_len marker to split name from suffix. This halves the allocation count compared to storing name and device ID separately, and eliminates the copy when rewriting the address via reset_with().
The name() and as_str() accessors are zero-cost slices into the same buffer — no allocations on access.
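A sketch of the single-buffer layout; the method names follow the document (name, as_str, reset_with), while the internals are assumed:

```rust
use std::fmt::Write;

// One String holds "{name}.{device_id}"; name_len marks the split point.
struct ProtocolAddress {
    buf: String,     // e.g. "alice@c.us.0"
    name_len: usize, // length of "alice@c.us"
}

impl ProtocolAddress {
    fn new(name: &str, device_id: u32) -> Self {
        let mut buf = String::with_capacity(name.len() + 4);
        buf.push_str(name);
        buf.push('.');
        write!(buf, "{}", device_id).expect("writing to a String cannot fail");
        ProtocolAddress { buf, name_len: name.len() }
    }

    /// Zero-cost slice into the shared buffer.
    fn name(&self) -> &str {
        &self.buf[..self.name_len]
    }

    /// The full "{name}.{device_id}" string, also a zero-cost slice.
    fn as_str(&self) -> &str {
        &self.buf
    }

    /// Rewrite in place, reusing the existing buffer capacity.
    fn reset_with(&mut self, name: &str, device_id: u32) {
        self.buf.clear();
        self.buf.push_str(name);
        self.buf.push('.');
        write!(self.buf, "{}", device_id).expect("writing to a String cannot fail");
        self.name_len = name.len();
    }
}
```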
Reusable hot-loop address construction
When iterating over many devices (e.g., during group stanza preparation or session resolution), allocating a fresh ProtocolAddress per device is wasteful. The JidExt trait provides reset_protocol_address() to rewrite a pre-allocated address in place, and make_reusable_protocol_address() creates the initial buffer. This avoids one String allocation per device in the loop. For a group with 100 participant devices, that saves 100 heap allocations on the send path. The pre-allocated capacity of 64 bytes covers all known WhatsApp address formats without reallocation.
Use to_protocol_address() for one-shot address construction (e.g., cache keys, single lookups). Use make_reusable_protocol_address() + reset_protocol_address() when iterating over multiple JIDs in a tight loop.
wacore/libsignal/src/core/address.rs, wacore/src/types/jid.rs, wacore/src/send.rs
Zero-Allocation JID Deduplication
Group stanza preparation needs to deduplicate participant JIDs at two stages: before device resolution (by user identity) and after LID conversion (by device identity). Two utility functions in wacore/src/types/jid.rs handle this with in-place sorted dedup instead of HashSet allocations. Both use sort_unstable_by followed by dedup_by, comparing JID fields directly without allocating intermediate strings or hash sets. This is more efficient than the HashSet<(String, String)> approach because:
- No per-JID String::clone() for hash keys
- No HashSet allocation or hashing overhead
- Stable dedup order (sorted) instead of hash-dependent iteration
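The technique can be sketched as below (the function name and simplified Jid are illustrative; the document does not name the two utility functions):

```rust
// In-place sorted dedup by device identity (user, device): no HashSet,
// no per-JID String clones.
#[derive(Clone, Debug, PartialEq)]
struct Jid {
    user: String,
    device: u16,
}

fn dedup_jids_by_device(jids: &mut Vec<Jid>) {
    // Compare fields directly; no intermediate keys are allocated.
    jids.sort_unstable_by(|a, b| a.user.cmp(&b.user).then(a.device.cmp(&b.device)));
    // dedup_by removes consecutive equal entries after the sort.
    jids.dedup_by(|a, b| a.user == b.user && a.device == b.device);
}
```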
wacore/src/send.rs, wacore/src/types/jid.rs:33-51
Take/Restore Pattern
Avoids cloning session states during decryption attempts:
wacore/libsignal/src/protocol/session_cipher.rs:495-564
Buffer Reuse
Thread-local buffers eliminate per-message allocations:
wacore/libsignal/src/protocol/session_cipher.rs:20-54
Public API
The client.signal() accessor exposes low-level Signal protocol operations for direct use. This includes 1:1 and group encryption/decryption, session validation, session deletion, participant node creation, and device resolution.
See Signal API reference for full method documentation and examples.
Related Components
- Binary Protocol - How encrypted messages are serialized
- State Management - How session state is persisted
- WebSocket Handling - Transport layer for encrypted messages
- Signal API - Public API for Signal protocol operations
References
- Signal Protocol Specification
- libsignal Repository
- Source: wacore/libsignal/src/protocol/
- Storage: src/store/signal.rs, src/store/signal_adapter.rs, wacore/src/store/signal_cache.rs