https://micahflee.com/ddosecrets-publishes-410-gb-of-heap-dumps-hacked-from-telemessages-archive-server/ - the "obvious" way to fix this is to forbid unofficial clients, which is not the software freedom perspective, but right now I have no idea whether someone I'm sending messages to is using a hacked client that's exporting everything in plaintext to an insecure cloud service, and that feels like a bad thing?
Or maybe the answer is that this is a social issue rather than a technical one and I should just not be communicating with anyone I don't trust to not do that
@mjg59 You don't know whether they are using fucking Windows Recall or whatever it is called now either, so if you don't trust who you are talking to, you are fucked anyway.
@mjg59 iirc signal's official position is that forks shouldn't be used and i think there was at least one case where they threatened to enforce the "You must not (or assist others to) access, use, modify, distribute, transfer, or exploit our Services in unauthorized manners, or in ways that harm Signal, our Services, or systems." ToS clause
You're missing major amounts of context.
It may be hacky, but it's not a "hacked" client. The client is sending messages to an archive service for auditability, as required by law.
The default Signal client isn't allowed since it doesn't tick the boxes required for the US gov.
This article provides the relevant background: https://en.wikipedia.org/wiki/TeleMessage#Products
@mjg59 and how would you know that the other party isn't doing that once it is forbidden anyway? How would just forbidding it prevent that?
How do you know the person isn't taking screenshots of every message?
If you don't trust the recipient, nothing can be done to avoid problems.
@gbargoud There's a meaningful distinction between an actively hostile user and one who may simply be unaware that the software on their phone has a meaningfully different set of security properties from the unmodified version
@agowa338 This is what technology like the Play Integrity API lets you do (which, like I said, may not be a good thing! But there are technical approaches that reduce risk here)
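A minimal sketch of what that looks like on Android, assuming the standard Play Integrity client library; requestIntegrityVerdict() and sendTokenToServer() are hypothetical names used here for illustration, not part of the API:

```kotlin
// Sketch: requesting a Play Integrity verdict token from an Android app.
// Assumes the com.google.android.play:integrity dependency is present.
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityVerdict(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context.applicationContext)

    val request = IntegrityTokenRequest.builder()
        .setNonce(nonce) // server-generated nonce that binds the verdict to this session
        .build()

    integrityManager.requestIntegrityToken(request)
        .addOnSuccessListener { response ->
            // The token is opaque to the app: it is encrypted and signed by Google,
            // so a modified client can't alter the verdict carried inside it.
            sendTokenToServer(response.token())
        }
        .addOnFailureListener { e ->
            // e.g. Play services unavailable, network failure, API not enabled
        }
}

// Hypothetical placeholder: relay the token to the service that will verify it.
fun sendTokenToServer(token: String) { /* ... */ }
```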
@agowa338 this is not the day where I teach people remote attestation from scratch again, I'm sorry
@mjg59 I know how remote attestation works. And you should know that almost all of these approaches have been compromised by now.
@agowa338 And I know what's involved in circumventing them, and how unlikely that is to be done on corporate or government issued devices
@mjg59 But if nobody can supply a client that manages to circumvent it, they'll look for a client that runs the app on a server and forwards the UI to the client device.
And if that also doesn't work, then that app or service will just not be used.
If someone wants their messages saved to an insecure cloud service, they'll do it either way...
@agowa338 If the app or service isn't used then that's fine! I want to reduce the risk of me sending something to someone over a channel that presents itself as secure but which actually isn't
@mjg59 And I'm not sure how you would ensure that, given how enterprises tend to work around exactly that. I've seen them strip E2E encryption before forwarding email to O365 so that they can use O365's filtering and route mail to different mailboxes.
I'd expect them (if they want to use it) to just Citrix-style forward it to devices and record the session server-side with OCR...
@panda No. That's what it's being used for in this context, but this client was also used by people who were not legally required to do so (company policy, for instance, rather than a legal requirement). I, as someone communicating with someone else, have no idea whether or not they're using such a client, and even if the reason the plaintext is being collected is to meet legal requirements, I still want to know that so I can consider what I feel comfortable sending.
@shadowwwind @mjg59 presumably without a centralised list of clients and their keys the client could just lie about that though.
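For context on the "just lie about it" concern: with attestation schemes like Play Integrity, the app only relays an opaque token that Google produces and signs, and the relying service asks Google to decode it rather than trusting anything the client asserts. A minimal sketch of that server-side check, assuming an OAuth2 access token authorized for the Play Integrity API is already available; the JSON handling here is illustrative:

```kotlin
// Sketch: server-side verification of a Play Integrity token via Google's REST
// endpoint. Obtaining accessToken (an OAuth2 token for the Play Integrity API)
// is out of scope here.
import java.net.HttpURLConnection
import java.net.URL

fun decodeIntegrityToken(packageName: String, integrityToken: String, accessToken: String): String {
    val url = URL("https://playintegrity.googleapis.com/v1/$packageName:decodeIntegrityToken")
    val conn = url.openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    conn.setRequestProperty("Content-Type", "application/json")

    // The token was produced and signed by Google; the service never trusts a
    // verdict asserted directly by the client.
    conn.outputStream.use { it.write("""{"integrityToken":"$integrityToken"}""".toByteArray()) }

    // The decoded verdict includes fields such as appIntegrity.appRecognitionVerdict
    // and deviceIntegrity.deviceRecognitionVerdict; the relying service decides
    // which verdicts it will accept before delivering messages.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```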
@mjg59 You probably know better than most others that one could use attestation to cryptographically identify the client, but is that really what we want?
I see it more as a social problem. People could be screen-recording your conversation anyway, even with the official client.
@mjg59
Basically this.
The trustworthiness of the person you are talking with is part of your threat model. You shouldn't be sharing information with someone you don't trust regardless of how exactly we can confirm their identity and ensure the channel's security.
@mjg59
Regrettably, I have to back this post.
I mean it's essentially my repeated advice to my kids, isn't it?
"Once you send that private (picture|text|whatever), it's up to the recipient to *keep* it private."
There's rarely a good technical solution if the other end of the conversation isn't trusted.
But you know all this. I'm just lending the weight of a Random Internet Guy to the social>technical vote.
@mjg59 right, it's not a technical issue. Because:
"I have no idea whether someone I'm sending messages to ...":
- doesn't block sensitive notifications on the lock screen
- has a weak or nonexistent screenlock PIN
- linked a PC to their account and it's the family PC where the only user is "Administrator"
- forwards every attachment over MMS/SMS
- lets randos borrow their phone
- downloaded Pegasus
- is a blabbermouth
...
TLDR: Signal can't solve these problems.