Consider whether re-requesting all URLs over HTTPS is safe#21
It is possible that issuing a request to a resource could pose a security threat. To mitigate this concern, the secure-variant check only requests against the root domain: the path and query string are stripped off before the request is made.
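The stripping step could look something like the following. This is a minimal sketch in Python rather than the plugin's PHP, purely for illustration; the function name strip_to_root is hypothetical and not part of the plugin.

```python
from urllib.parse import urlsplit, urlunsplit

def strip_to_root(uri):
    """Reduce a URI to scheme + host only, dropping the path,
    query string, and fragment before any re-request is made."""
    parts = urlsplit(uri)
    return urlunsplit((parts.scheme, parts.netloc, "", "", ""))

# strip_to_root("http://example.com/evil.js?id=42") → "http://example.com"
```

The idea is simply that nothing beyond the host ever leaves the server, so sensitive path or query parameters in a blocked URI cannot be replayed.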
I think this is sufficient. Interestingly, this issue reminds me of how browsers report the "blocked-uri". In Chrome and Safari, the "blocked-uri" does not contain path or query string information if the URI's origin differs from the document's (e.g., when loading domain.com, any violations from vimeo.com will not contain paths or query strings); however, if the blocked URI shares the document's origin, it does contain them (e.g., when loading domain.com/evil.js from domain.com, the full path and query string are passed). In Firefox, the full path and query string are always passed #fml.

I say all of that to point out that, for the most part, the secure check was already against the domain only. Firefox does not support the admin/auth mode (Firefox does not pass cookies with the request), so the only way people would get path and query string info is if they turned on sampling mode. Either way, I have removed the path and query string info from the request. I'd appreciate any thoughts you have on this PR.
In the beacon handler, the blocked URI is re-requested over HTTPS server-side in an attempt to see if it resolves (see mcd_uri_has_secure_version()). This then sets a flag on the violation report stating whether a secure version of the URI is available. For scripts, styles, images, etc., this is probably fine, but I wonder whether it could be a concern to re-request other URIs, such as iframe URLs (which are more likely to contain sensitive parameters) or GET XHR requests (which might cause problems if they're repeated).
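For reference, the shape of that check can be sketched as follows. This is a hedged Python illustration, not the plugin's actual PHP implementation of mcd_uri_has_secure_version(); the helper names secure_root and uri_has_secure_version are assumptions, and the 2xx/3xx success criterion is one plausible choice.

```python
import urllib.error
import urllib.request
from urllib.parse import urlsplit, urlunsplit

def secure_root(uri):
    """Build the https:// root-domain variant of a URI.
    Path, query string, and fragment are dropped entirely."""
    return urlunsplit(("https", urlsplit(uri).netloc, "", "", ""))

def uri_has_secure_version(uri, timeout=5):
    """HEAD-request the HTTPS root of `uri` server-side and report
    whether the host answers with a success-ish status code."""
    req = urllib.request.Request(secure_root(uri), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # No HTTPS listener, TLS failure, DNS failure, timeout, etc.
        return False
```

Because only the host is ever contacted, the concerns above about sensitive iframe parameters or replayed GET XHRs apply only if the full URI were re-requested; a HEAD against the bare root domain sidesteps most of that, at the cost of not proving that the specific resource itself is available over HTTPS.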
Some options: