
Monday, 18 November 2024

Responsible disclosure: EteSync vulnerabilities

EteSync is software for end-to-end encrypted data synchronization. The idea is great. The implementation might be decent, but it is not perfect. I’ve discovered some vulnerabilities and reported them, but they weren’t fixed within the 90-day deadline. At the time of publishing, I saw no indication of fixes coming in the near future, so I decided to publish the vulnerabilities in order to inform users. After publishing this report, the vulnerabilities were fixed and a new version of the EteSync DAV bridge was released.

What is affected?

EteSync consists of multiple components: the Android app, the iOS app, the DAV bridge, the server, etc. The vulnerabilities directly affect only the DAV bridge. However, even if you don’t use the EteSync DAV bridge, you might be somewhat affected:

  • While I don’t think the DAV bridge is used as a component of some other app, I am not 100% sure. Update: the developer has confirmed that other apps aren't affected.
  • More importantly, the reaction raises questions about how future reports would be processed. While I’d like to use EteSync, I can’t recommend it until I see more active development of EteSync or a fork of it.

The vulnerabilities

While none of the vulnerabilities looks horrible on its own, they can be chained together. If an attacker knows your username and persuades you to open a malicious website (no extra permissions needed), they can connect to etesync-dav (even if it listens only on localhost) and extract sensitive data. Modification of the data wasn’t tested, but it might be possible. Potential techniques that would allow the attacker to guess the username haven’t been investigated much.

1. Incorrect password validation

When you log into the EteSync DAV bridge, you get another password, which allows you to access the DAV endpoints and the EteSync DAV bridge web interface. This password is different from the encryption password and is only meaningful when interacting with the bridge. While the DAV endpoints seem to validate the password, the web interface actually accepts any password, which consequently breaks the security of the DAV endpoints (see below).

I’ve briefly looked at the implementation. It seems that the bridge validates the password against the EteSync server. This is wrong, because the password should be validated locally; the EteSync server doesn’t care about it. Unfortunately, the endpoint ignores the credentials and returns a 200-ish response, which makes the EteSync DAV bridge consider the credentials valid. My initial understanding of what happens was probably wrong; I'll investigate it later.

Even though the DAV endpoints seem to validate the password properly, they aren’t secure either: after the attacker logs in to the web interface with any password, the web interface hands them the correct password that can be used with the DAV endpoints.
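
For illustration only, local validation could look roughly like this; the function name and the idea of a locally stored hash are my assumptions, not etesync-dav's actual code:

import hashlib
import hmac

def verify_web_password(supplied: str, stored_hash_hex: str) -> bool:
    # Hash the supplied password and compare it in constant time against a
    # locally stored hash, without consulting the EteSync server at all.
    supplied_hash = hashlib.sha256(supplied.encode("utf-8")).hexdigest()
    return hmac.compare_digest(supplied_hash, stored_hash_hex)

The point is that the decision about validity is made entirely on the local machine, so a confused or unreachable remote endpoint cannot turn an invalid password into a valid one.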

2. DNS rebinding AKA “But it listens just on localhost, doesn’t it?”

Unfortunately, even if your instance of the EteSync DAV bridge listens only on localhost, that doesn’t mean it is properly protected. The web is a complex thing, and there are techniques that allow accessing a server indirectly through a confused deputy. In this case, an attacker can abuse the victim’s web browser as a proxy to the EteSync DAV bridge through a DNS rebinding attack.

Fixing the vulnerabilities

Fixing shouldn’t be hard.

  • The DNS rebinding attack can be resolved by proper Host header validation. An unrecognized non-IP Host header is a sign of a potential DNS rebinding attack. Rejecting requests with bad Host headers (and maybe requests with no Host header) resolves the DNS rebinding attack; a minimal sketch follows this list.
  • Password validation is already implemented for the DAV part, so we can reuse it.
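
To illustrate the Host header check, here is a minimal sketch for a Flask-style WSGI app; the allowed host names and port are placeholders, and etesync-dav's real request handling may be structured differently:

from flask import Flask, abort, request

app = Flask(__name__)

# Host values we expect for a bridge bound to localhost; anything else
# (e.g. an attacker-controlled rebinding domain) gets rejected.
ALLOWED_HOSTS = {"localhost:37358", "127.0.0.1:37358", "[::1]:37358"}

@app.before_request
def reject_dns_rebinding():
    if request.host not in ALLOWED_HOSTS:
        abort(400)  # unrecognized Host header: possible DNS rebinding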

However, I was unable to build etesync-dav (commit b9b23bf6fba60d42012008ba06023bccd9109c08) from source even without any modifications:

% pip install -r requirements.txt
…
ERROR: Cannot install -r requirements.txt (line 31) and itsdangerous==2.0.1 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested itsdangerous==2.0.1
    flask 2.3.2 depends on itsdangerous>=2.1.2

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Using venv doesn’t help, and neither does using another distro (Debian instead of Fedora). Unfortunately, I have not received further guidance from the developer. I could have dug deeper and fixed the dependency issue myself, but I wasn’t dedicated enough to do it.

Developer response

The vulnerabilities were reported according to security.txt on 14 August 2024 with the deadline set to 12 November 2024, giving the developers about 90 days (well, rounded to 90 days). I got the first response on 1 September 2024. At the beginning, it looked promising, but the conversation faded away. The developer seems to be busy with other things. I have to admit that part of the delay was caused by me, as I wasn't very responsive in September.

After publishing the vulnerabilities, I got a reaction from the EteSync developer, clarified what was unclear, and the developer released fixes for those vulnerabilities.

Fixes by developer

The password validation bug is fixed. DNS rebinding is fixed partially: the DAV endpoints aren't protected from DNS rebinding yet, but this shouldn't be an issue as long as authentication works and the DAV password isn't leaked.

  • Originally: stop using EteSync DAV until those vulnerabilities are fixed. Now that fixes are out: upgrade the EteSync DAV bridge and regenerate the DAV password by re-adding your account.
  • Consider not using other EteSync clients until there is some evidence that they are maintained. Unfortunately, I don’t have any suggestion for a direct alternative. While DecSync CC + Syncthing looks like a good alternative, the development activity doesn’t look encouraging. Moreover, I am not sure about the soundness of its data model, considering that file synchronization might occur in any order, which might introduce various edge cases.

Request for developers

While I am still grateful to you for developing an awesome product, and I understand that you are busy with other tasks, I unfortunately cannot currently recommend EteSync. Adding some other people who can properly respond to vulnerability reports and fix the vulnerabilities could make EteSync great again.

Update history

  • Fix was released
  • Developer communication
  • Understanding of the password validation bug

Wednesday, 8 August 2018

Cache-friendly secure connection for websites

While commenting on Eric Meyer's article about the issues that HTTPS brings to Africans, I realized that this should probably also be posted as an article. I discuss how to allow better caching while keeping a reasonable level of the security brought by HTTPS.

We still need secure connection, even for public static sites

I still believe we should have secure communication everywhere; I am just not 100% sure it should be today's HTTPS.

We need a secure connection even for public static sites. Reason #1 is not encryption, it is authentication. We do not want infected routers / people with a Wi-Fi Pineapple / malicious ISPs / etc. to modify the webpages we see. Without some kind of secure connection, they could, for example, inject cryptominers, advertisements or malware. They could also modify the content of static pages to instruct people to do something dangerous, e.g., change the recommended amount of some chemical.

Do we always want TLS?

The way we secure our communication does not have to be today's HTTPS, though. Encryption is needed only sometimes. On public static sites, it can somewhat obscure what you are looking at (e.g., the attacker sees you are looking at Wikipedia, but not which page), but traffic volume analysis can often distinguish between specific pages anyway.

How to make it better?

Let's look at some options to make it better. There will be some tradeoffs to privacy, but we will not let attackers affect traffic in an arbitrary way, as plain HTTP would. Thus, we would not make the user more prone to downgrade attacks than with today's HTTPS. Our main point is letting caches do their job, maybe a better one than current state-of-the-art HTTP caches can do.

Mixed content secured by SRI

First, we could sometimes achieve a reasonable level of security even with plain HTTP. We could load some images, stylesheets and even scripts over plain HTTP, provided they are protected by subresource integrity (SRI). I have wondered why browsers consider even SRI-protected resources to be mixed content. They are protected against modification and they do not necessarily contain anything sensitive. I don't particularly need to hide the fact that I am downloading jQuery 1.8.1… (Today, such a change in browsers can be a bit more complex if it has to be compatible with older browsers with a stricter mixed content policy. It would ideally bring something like an allowplain attribute, allowing usage of plain HTTP instead of HTTPS.)
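
For context, the integrity attribute value is just a base64-encoded digest of the resource. A small Python sketch of how such a value could be computed for a local copy of a script (the file name is illustrative):

import base64
import hashlib

def sri_hash(path: str) -> str:
    # Compute a subresource-integrity value (sha384) for a file.
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# e.g. sri_hash("jquery-1.8.1.min.js") gives the value for the integrity attribute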

Shared cache based on hashes

With SRI, we could go a bit further. Where explicitly approved by some extra header, the browser could match just the hash for caching purposes, even if it has never downloaded the specific URL. As a result, we would not needlessly download dozens of exactly the same copies of jQuery or Bootstrap; we could download each just once and then use the cache. While this could serve as a minor side channel revealing what files are already in your cache, explicit approval through some header can make it a non-issue.
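
A rough sketch of what hash-keyed lookup could mean, assuming the server opted in via some hypothetical header; all names here are made up for illustration:

import base64
import hashlib

cache_by_hash = {}  # integrity value -> cached response body

def body_integrity(body: bytes) -> str:
    return "sha384-" + base64.b64encode(hashlib.sha384(body).digest()).decode("ascii")

def fetch_subresource(url: str, integrity: str, download) -> bytes:
    # Hypothetical: serve any previously seen body with a matching hash,
    # regardless of which URL it was originally downloaded from.
    if integrity in cache_by_hash:
        return cache_by_hash[integrity]        # no network request at all
    body = download(url)
    if body_integrity(body) != integrity:
        raise ValueError("integrity mismatch")  # behave as SRI does today
    cache_by_hash[integrity] = body
    return body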

Serving signed responses from caching proxy

We could also have caches of signed (but probably unencrypted) data. This comes with some privacy tradeoff and a new protocol to implement, but it does not give up data authentication. A cache server could return some data with an expiration time and a signature, even without contacting the upstream server. This is quite a bit more complex, but still technically feasible. We cannot use TLS at this point, because TLS works at the transport layer, which we would like to intercept. The handshake could however start as a standard TLS handshake and continue with a different protocol:

Client: ClientHello, I am trying to connect through TLS to host example.com, here are my capabilities (ciphersuites). I am also able to use a caching proxy instead of standard TLS.
Caching proxy: Hey, I have some content for this server cached. See my non-expired approval from the server, signed by the private key of the certificate holder; I am allowed to serve you some of the requests. Plus here is the OCSP response, so you know the server's certificate is not revoked. You see, the private key holder indicates there is nothing sensitive in the URL, so you can send it to me.
Client: OK, here is the full URL: https://example.com/contact
Caching proxy: OK, here are the data, authenticated by the server.

If the client or the server does not support such a feature, either because it is not implemented or because they don't want it for some reason, no other party can force the communication to go this way instead of standard TLS.

  • Website owner agreement is needed: if the proxy does not have a signed and non-expired approval, it cannot force the client to reveal the full URL.
  • If the browser chooses not to use this way (e.g., because of the user's decision), it can insist on a standard TLS handshake.
  • A standard TLS handshake can be required for some blacklisted URLs (e.g., /api/*), POST requests, or when some specific cookie is present. Those exceptions could be described in the initial approval.
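
A very rough sketch of the client-side check, using Ed25519 signatures and an expiry timestamp; the message layout and field names are made up for illustration and would need a proper specification:

import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def accept_cached_response(site_public_key: bytes, url: str, body: bytes,
                           expires_at: int, signature: bytes) -> bool:
    # Accept a response served by a caching proxy only if it has not expired
    # and is signed by the site's key (so the proxy cannot alter it).
    if time.time() > expires_at:
        return False
    signed_message = url.encode() + b"\n" + str(expires_at).encode() + b"\n" + body
    try:
        Ed25519PublicKey.from_public_bytes(site_public_key).verify(signature, signed_message)
    except InvalidSignature:
        return False
    return True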

Cache-friendly version?

I am, however, generally against making special cache-friendly sites, similar to the past “wap” or “mobile” versions. If they have a different URL, it gets tricky to handle links: when I click a link from elsewhere, it does not necessarily point to the version I want. It would also force the website owner not to use HSTS, which is probably not what we want.

Challenges

  • UX issues: maybe only some users will want such a tradeoff, while others will not. How do we allow both of them to make an informed decision?
  • None of those suggestions has been sufficiently reviewed by others. Furthermore, the description of signed caches is too vague to review properly, because I have preferred to be concise. While I have some security and crypto background, I don't think this should be implemented without any review.
  • This would require multiple parties to implement it. All the ideas require some change in the browser and on the website. The last one also requires a significant modification of the webserver and proxy. The incentives to implement this may be quite low for most people with a fast Internet connection. On the other hand, the SRI enhancements are not so hard (i.e., they are much easier than extending HTTPS to some TLS alternative) and can be useful even in Europe or America on mobile connections, even when there is no proxy that can speed up loading.
  • Any change in browsers is likely irrelevant for people with Windows XP or something similar. On the other hand, the changes could be welcome anyway as long as their usage doesn't break anything.

Tuesday, 2 June 2015

Review of Crypto library in Play! framework

I'd like to discuss the security and the purpose of the Crypto library, namely its encryptAES and decryptAES methods. I find the library easy to misuse. Moreover, the recent 2.4 update changed some security properties. That is, some previously insecure usages are secure now, but also some previously secure usages are insecure since 2.4. This means users of the Crypto library should consider the security impact before migrating to 2.4.

What has been changed?

The ECB mode has been replaced by the CTR mode. I'll quote a misleading claim from the official documentation: “The CTR mode is much more secure than the ECB mode.”

ECB mode is not recommended in general, so this might look like a good decision at first sight. While CTR mode can be more secure if properly used, it has some different pitfalls. Because some of these CTR-mode pitfalls are not present in ECB, some previously secure code might become insecure.

There are some more changes, e.g. better entropy of the key (higher effective key size). The old Crypto library uses the first 16 characters of a string key (i.e. application.secret by default) as the key, which is wrong, especially when the string (application.secret) is hexadecimal (⟹ 64-bit effective key size) or similar.

The new Crypto uses a hash function for deriving the key, which is much better. A PBKDF would be even better for some purposes, but even now I don't see any significant issue with the new key-derivation approach. (But it depends on usage! I'll discuss it later.)
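
To illustrate the difference (this is not the actual Play! code, and the secret is a made-up placeholder): taking the first 16 characters of a hex secret yields only about 64 bits of entropy, while hashing the whole secret uses all of it.

import hashlib

secret = "0123456789abcdef0123456789abcdef"  # hypothetical application.secret

# Old approach: first 16 characters as the AES-128 key. If the secret is
# hexadecimal, each character carries only 4 bits, so ~64-bit effective key.
old_key = secret[:16].encode("ascii")

# New approach: hash the whole secret and use (part of) the digest as the key,
# so the full entropy of the secret contributes to the key.
new_key = hashlib.sha256(secret.encode("ascii")).digest()[:16]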

What might be wrong for some usages?

Unlike ECB, CTR is a stream cipher mode. Stream ciphers usually have two issues that are not present in ECB mode:

  1. Malleability. This one is not specific to stream ciphers, but stream ciphers are ultimately malleable. An adversary without the secret key can modify the ciphertext so that it means something different. For more details on malleability, see the related Wikipedia article.
  2. Insecurity when a key+IV pair is reused. If you use one key with the same IV twice, some details about both plaintexts are leaked, potentially revealing both of them. See the reused key attack for more details.

The malleability can be mitigated by authenticated encryption, but Play! does not do it implicitly. This would be acceptable for a completely new API if it were mentioned in the documentation. In Play!, however, the Crypto API is not completely new (so one might consider this a BC break with bad security implications), and the documentation doesn't even mention it.
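
For completeness, authenticated encryption usually means an AEAD mode such as AES-GCM; a minimal Python sketch of the idea (Play! itself is Scala/Java, so this only illustrates the concept, not its API):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                       # must be unique per message
ciphertext = aesgcm.encrypt(nonce, b"secret value", None)

# Any modification of the ciphertext makes decryption fail instead of
# silently producing attacker-controlled plaintext.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)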

The key+IV reuse attack (“keystream reuse attack”) can be mitigated by using random, unpredictable IVs. The documentation is unclear about the usage of IVs. It just states that both using an IV and not using an IV are supported, but it is not clear what the default is.
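
The keystream reuse problem is easy to demonstrate; a short Python sketch (again, not Play! code) showing that reusing the key+IV pair leaks the XOR of the two plaintexts:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
iv = os.urandom(16)          # deliberately reused for both messages

def ctr_encrypt(plaintext: bytes) -> bytes:
    encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

c1 = ctr_encrypt(b"attack at dawn!!")
c2 = ctr_encrypt(b"retreat at noon!")

# XOR of the ciphertexts equals XOR of the plaintexts: the keystream cancels out.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(b"attack at dawn!!", b"retreat at noon!"))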

What else is wrong with the documentation?

I've also found some relics of ECB in the documentation. It is a minor issue: the documentation states that some usage is insecure, although the issue does not apply to CTR mode. See my comment on the related GitHub issue.

What/who is the Play! Crypto library intended for?

A proper mode of operation must be selected to ensure the desired level of security for the desired type of usage. There are various properties that can be considered neither good nor bad without defining the intended usage. I am also not sure whether the library is intended for crypto newbies (it is easy for them to use it incorrectly) or crypto experts (they would want to choose the mode of operation themselves).

In addition to the two CTR-related issues mentioned above, it is questionable whether a PBKDF should be used. It is unneeded in some cases (e.g. if the key is application.secret), but it is welcome if you are using a potentially weak password (e.g. a user password), because it slows brute-force attacks down by some factor.
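
For the weak-password case, a password-based KDF such as PBKDF2 is the standard way to slow brute force down; a Python sketch of the idea with illustrative parameters:

import hashlib
import os

password = b"correct horse battery staple"   # potentially weak user password
salt = os.urandom(16)

# 200,000 iterations of HMAC-SHA-256 make each brute-force guess that much
# more expensive; the 16-byte output can serve as an AES-128 key.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=16)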

Well, I admit one can configure the mode. But I don't think that a global config (i.e. the play.crypto.aes.transformation config option) is a good idea. It is generally unclear what code is affected by changing this property. Is some library code affected? I don't know until I analyze all the libraries I use.

I'd like to hear the developers' answer to this question. It should also be noted in the documentation. Without it, one might assume that almost any behavior is OK.

Why do I disclose it publicly?

I respect the responsible disclosure objective, but I don't think that keeping this issue private makes any sense now, especially when 2.4 is fresh. I feel it is better to warn programmers that they should think twice about the 2.3 ⟶ 2.4 migration if they are using the Play! Crypto library.

Discussion

If you wish to discuss it, you should do so in the discussion thread on play-framework user group. Comments under this article are closed in order to prevent two separate discussions.