Greetings!
A friend of mine wants to be more secure and private in light of recent events in the USA.
They originally told me they were going to use Telegram, to which I explained that Telegram is considered compromised and Signal is far more secure to use.
But they want more detailed explanations than what I provided verbally. Please help me explain things better to them! ✨
I am going to forward this thread to them, so they can see all your responses! And if you can, please cite!
Thank you! ✨
How?
If I share an IP with 100 million other Signal users and I send a sealed sender message, how does Signal distinguish between me and the other 100 million users? My sender certificate is encrypted and only able to be decrypted by the recipient.
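A toy sketch of that idea (this is NOT Signal's actual envelope format or cryptography; the names are made up and the "encryption" is a stand-in): the routing layer sees only the recipient, while the sender certificate travels inside the opaque payload, so everyone behind the same IP produces the same outer envelope shape.

```python
# Toy illustration of the sealed-sender concept, not Signal's wire format.
import base64

def toy_encrypt(plaintext: str) -> str:
    # Stand-in for real public-key encryption to the recipient; base64 is
    # obviously NOT encryption, it just keeps the example runnable.
    return base64.b64encode(plaintext.encode()).decode()

def seal(sender_cert: str, body: str, recipient_id: str) -> dict:
    return {
        "recipient": recipient_id,                        # visible to the server
        "payload": toy_encrypt(f"{sender_cert}|{body}"),  # opaque to the server
    }

envelope = seal("cert:alice", "hi", "recipient:bob")
print(envelope["recipient"])              # the only identity the server routes on
assert "alice" not in envelope["payload"]  # sender identity absent from the outside
```

The point of the sketch: nothing in the server-visible fields names the sender, which is why IP address (and timing, below) becomes the remaining signal.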
If I'm the only user with my IP address, then sure, Signal could identify me. I can use a VPN or similar technology if I'm concerned about this, of course. Signal doesn't consider obscuring IPs to be in scope for their mission - there was a recent Cloudflare vulnerability that impacted Signal where they mentioned this. From https://www.404media.co/cloudflare-issue-can-leak-chat-app-users-broad-location/
…
I saw a post about this recently on Lemmy (and Reddit), so there's probably more discussion there.
What do you mean when you say "conversation" here? Do you mean when you first access a user's profile key, which is required to send a sealed sender message to them if they haven't enabled "Allow From Anyone" in their settings? If so, then yes, the sender's identity when requesting the contact would necessarily be exposed. If the recipient has that option enabled, that's not necessarily true, but I don't know for sure.
Even if we trust Signal, with Sealed Sender, without any sort of random delay in message delivery, a nation-state level adversary could observe inbound and outbound network activity and derive high-confidence information about who's contacting whom.
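To make the timing-correlation concern concrete, here's a toy model (all names and timestamps are invented): a passive observer logs only when traffic enters and leaves the server, with no message contents or sender metadata at all, and still links parties by pairing each upload with the download that follows it within a latency window.

```python
# Toy timing-correlation model, not a real attack on Signal's infrastructure.
from itertools import product

# Hypothetical observations at the network edge:
uploads = [("alice", 10.00), ("bob", 10.40), ("carol", 11.10)]    # (account, t)
downloads = [("dave", 10.05), ("erin", 10.46), ("frank", 11.12)]  # (account, t)

WINDOW = 0.2  # seconds; delivery-latency bound the observer assumes

def correlate(uploads, downloads, window=WINDOW):
    """Pair each upload with any download that follows within the window."""
    links = []
    for (sender, t_up), (recipient, t_down) in product(uploads, downloads):
        if 0 <= t_down - t_up <= window:
            links.append((sender, recipient))
    return links

print(correlate(uploads, downloads))
# → [('alice', 'dave'), ('bob', 'erin'), ('carol', 'frank')]
```

With no random delivery delay and low traffic volume, every upload matches exactly one download; random delays and cover traffic widen the window and produce ambiguous, many-to-many matches instead.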
All of that said, my understanding is that contact discovery is a bigger vulnerability than Sealed Sender if we don't trust Signal's servers. Here's the blog post from 2017 where Moxie described their approach. (See also this blog post where they talk about improvements to "Oblivious RAM," though it doesn't have more information on SGX.) He basically said "This solution isn't great if you don't trust that the servers are running verified code."
He then continued on to describe their use of SGX and remote attestation over a network, which was touched on in the Sealed Sender post. Specifically:
Later in that blog post, Moxie says "The enclave code builds reproducibly, so anyone can verify that the published source code corresponds to the MRENCLAVE value of the remote enclave." But how do we actually perform this remote attestation? And is it as secure and reliable as Signal attests?
In the docs for the "auditee" application, the Examples page provides some additional information and describes how to use their tool to verify the MRENCLAVE value. Note that they also say that the tool is a work in progress and shouldn't be trusted. The Intel SGX documentation likely has information as well, but most of the links that I found were dead, so I didn't investigate further.
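Whatever tooling performs it, the verification ultimately reduces to one comparison: rebuild the enclave reproducibly, compute its measurement, and check it against the MRENCLAVE reported in the signed attestation quote. A toy sketch of just that final step (the genuinely hard parts - deterministic rebuilds and validating the quote's signature chain back to Intel - are elided, and the values below are made up):

```python
# Sketch of the final comparison in reproducible-build verification.
# MRENCLAVE is, roughly, a SHA-256 digest accumulated over the enclave's
# build/load process, so identical builds yield identical measurements.
import hashlib
import hmac

def measurements_match(local_mrenclave: bytes, attested_mrenclave: bytes) -> bool:
    """Constant-time comparison of a locally reproduced enclave measurement
    against the MRENCLAVE extracted from a remote attestation quote."""
    return hmac.compare_digest(local_mrenclave, attested_mrenclave)

# Stand-ins for real measurements:
local = hashlib.sha256(b"reproduced enclave image").digest()
attested = hashlib.sha256(b"reproduced enclave image").digest()
print(measurements_match(local, attested))  # True only if the builds are identical
```

The trust question the thread raises is exactly about everything this sketch elides: you have to trust the build's reproducibility, the quote-verification tooling, and Intel's attestation infrastructure.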
A blog post titled Enhancing trust for SGX enclaves raised some concerns with SGX's current implementation, specifically mentioning Signal's usage, and suggested (and implemented) some improvements.
I haven't personally verified the MRENCLAVE values for any of Signal's services and I'm not aware of anyone who has (successfully, at least), but I also haven't seen any security experts stating that the technology is unsound or doesn't actually do what's claimed.
Finally, I recommend you check out https://community.signalusers.org/t/overview-of-third-party-security-audits/13243 - some of the issues noted there involve the social graph and at least one involves Sealed Sender specifically (though the link is dead; I didn't check to see if the Internet Archive has a backup).
That's already not very likely, but ignoring IP, you're the only one with your SSL keys. As part of authentication, you are identified. All the information about your device is transmitted. Then you stop identifying yourself in future messages, but your SSL keys tie your messages together. They are discarded once the message is decrypted by the server, so your messages should in theory be anonymised in the case of a leak to a third party. That seems to be what sealed sender is designed for, but it isn't what I'm concerned about.
Right, but it's not other users I'm scared of. Signal also has my exit node.
I mean, if strangers can find my city on the secret chat app, I find that quite alarming. The example isn't that coarse, and Signal, being a centralised platform with 100% locked-down strict access, could well defend users against this.
When their keys are refreshed. I don't know how often. I meant a conversation as people understand it, not first-time contact. My quick internet search says that the maximum age for profile keys is 30 days, but I would imagine in practice it's more often.
That is true, but it's no reason to cut Signal slack. If either party is in another country or on a VPN, that's a mitigating factor against monitoring the whole network. But if Signal is sharing their data with that adversary, then the VPN (or being in a different country) has been defeated.
I appreciate the blog post and information. I don't trust them to only run the published server code. It's too juicy of a honeypot.
I don't have any comment on SGX here. It's one of those things where there are so many moving parts, so much secret information, and so much you have to understand and trust that it basically becomes impossible to verify, or even to put trust in someone who claims to have verified it. Sometimes that's an inappropriate position, but I think it's fine here: Signal doesn't offer me anything, so I have no reason to put so much effort into understanding what can be verified with SGX.
And thanks for the audits archive.
Why do you think that Signal uses SSL client keys or that it transmits unique information about your device? Do you have a source for that or is it just an assumption?
No, that's just an assumption. It's very standard. But they do use SSL; this is the code for it: https://github.com/signalapp/Signal-Android/blob/main/app/src/main/java/org/conscrypt/ConscryptSignal.java
That doesn't confirm they send anything extra about your device; that's an assumption as well.
I'm familiar with SSL in the context of webdev, where SSL (well, TLS) is standard, but there the standard only uses server certificates. Even as a best practice, consumer use cases for client certificates, where each client has a unique certificate, are extremely rare. In an app, I would assume that's equally true, and that any client certificates are shared - where every install from Google Play uses the same certificate, possibly rotated from version to version, and likewise with other distribution channels, like the App Store, the apk you can download from their site, F-Droid (if they were on it), and releases of other apps that use the same servers, like Molly. Other platforms might share the same key or have different keys, but in either case, they're shared among millions of users.
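To illustrate the distinction from the webdev side: in Python's `ssl` module (as in TLS generally), the default handshake only authenticates the server; a client certificate is an explicit opt-in on both ends. A minimal sketch, assuming nothing about Signal's actual setup:

```python
# In a default TLS handshake, only the SERVER presents a certificate.
import ssl

# Typical client-side context: verifies the server, sends no client cert.
client_ctx = ssl.create_default_context()
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # True: client verifies server

# Mutual TLS only happens if the server explicitly demands a client cert...
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # server now requires a client cert

# ...AND the client explicitly loads one (paths here are hypothetical):
# client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
```

So unless an app ships and loads a per-install certificate on purpose, TLS gives the server an encrypted channel and server authentication, not a unique long-lived client identity.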
I'm not sure Signal does have a client certificate, but I believe they do have a shared API access key that isn't part of the source code, and which they (at least previously) prohibited FOSS forks from using (and refused to grant them their own key).
That said, I reviewed that code. I'm not a big fan of Java and I'm not familiar with the Android APIs, but I'm familiar with TLS connections in webdev, the terms are pretty similar cross-language, and I did work in Java for about five years. I didn't see anything when reviewing that file that makes me think client certificates are being generated or used. Can you elaborate on what I'm missing?