Responses to being wrong on the internet

Gushi
Jun 5, 2023

So, in response to my last post, a friend of mine who moves in the infosec world left me a wall-o-text on Facebook.

And, rather than start a flame war, I’m simply documenting my responses here.

His response:

I think you and I have fundamentally different approaches to the Internet and associated technologies, but I have to say that you’re doing it wrong. I’ve been at this for a very long time, and I followed all of the changes to CSP, TLS, and participate in a number of IETF discussion groups where we set and build these standards. I get your frustration, but it feels like a lot of it is coming out of not having done the research.

You don’t put sites into full-on CSP enforcing mode, day one.

You put them into report-only. You work out all of the bugs, then you flip the switch to enforcing. CSP has been pretty static since its release, and it is an important and valuable tool to secure sites.

You also didn’t understand why inline scripts are bad. They’re bad because if the attacker can modify your site to inline JavaScript payloads (say, via XSS) then that JS will execute.

Don’t be lazy. Stop using inline scripts, move that shit into a file, and load it separately with secure, hashed script tags.

It’s also not “impossible” to get this right. Thousands of sites have deployed CSP correctly and made it work.

It has nothing to do with Web 2.0, and while you use that to cast CSP as some sort of evil and broken technology, it's a false argument.

It has everything to do with browser security.

Do you know why non-TLS sites are not secure, and why no one should be providing content over non-TLS connections on the web? Even static ones?

Because anyone can intercept and look at what a user is looking at, and that in and of itself is a privacy violation. In some cases if the content is controversial or specific or even sexual, it could be used against the user later.

The Wordpress comment? It’s a favorite thing to beat on for the “old guard.”

WordPress is inherently insecure because it's got years of bad plugins associated with it and it's written in an ancient language (PHP). However, there are plenty of sites that will let you run WordPress and convert it on the fly to an all-static, all-TLS-protected site (see Strattic).

It’s just rapidly becoming a false narrative now.

So let’s get into this:

You don’t put sites into full-on CSP enforcing mode, day one.

You put them into report-only. You work out all of the bugs, then you flip the switch to enforcing. CSP has been pretty static since its release, and it is an important and valuable tool to secure sites.

Unless it's an old static site that's only of interest to you. Or unless you'd read in the previous post that I had already done exactly that. This site is a toy that I use to understand things better.

As previously mentioned (in that article, go look), I've already developed things fully statically with Hugo, and this was a last-ditch effort to see what the pain level was in making this work with the existing site.

But yes, I’d been using report-only the whole time.

Some of those things will cause reports to be sent to your report-uri. Others will only trip errors in the browser. And after reading on several canonical sites what should be supported in which browsers, and finding them wrong about this very header, the only way to know is to try it with the real thing.
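For anyone following along, the deploy flow he's describing looks like this (the policy values here are illustrative, not my actual headers):

```http
# Phase 1: report violations, enforce nothing. Broken pages stay working.
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-reports

# Phase 2: once the reports go quiet, change the header name to enforce.
Content-Security-Policy: default-src 'self'; report-uri /csp-reports
```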

You also didn’t understand why inline scripts are bad. They’re bad because if the attacker can modify your site to inline JavaScript payloads (say, via XSS) then that JS will execute.

Again, the JavaScript we're talking about is this came-standard-with-Dreamweaver garbage:

<script language="JavaScript" type="text/javascript">
<!--
function MM_swapImgRestore() { //v3.0
var i,x,a=document.MM_sr; for(i=0;a&&i<a.length&&(x=a[i])&&x.oSrc;i++) x.src=x.oSrc;
}

function MM_preloadImages() { //v3.0
var d=document; if(d.images){ if(!d.MM_p) d.MM_p=new Array();
var i,j=d.MM_p.length,a=MM_preloadImages.arguments; for(i=0; i<a.length; i++)
if (a[i].indexOf("#")!=0){ d.MM_p[j]=new Image; d.MM_p[j++].src=a[i];}}
}

function MM_findObj(n, d) { //v3.0
var p,i,x; if(!d) d=document; if((p=n.indexOf("?"))>0&&parent.frames.length) {
d=parent.frames[n.substring(p+1)].document; n=n.substring(0,p);}
if(!(x=d[n])&&d.all) x=d.all[n]; for (i=0;!x&&i<d.forms.length;i++) x=d.forms[i][n];
for(i=0;!x&&d.layers&&i<d.layers.length;i++) x=MM_findObj(n,d.layers[i].document); return x;
}

function MM_swapImage() { //v3.0
var i,j=0,x,a=MM_swapImage.arguments; document.MM_sr=new Array; for(i=0;i<(a.length-2);i+=3)
if ((x=MM_findObj(a[i]))!=null){document.MM_sr[j++]=x; if(!x.oSrc) x.oSrc=x.src; x.src=a[i+2];}
}
//-->
</script>

It exists because this is how mouseovers were done in those days. I'm sure nothing bad has ever happened with the security of Adobe products! (That's a joke, by the way.) That code, as written, doesn't have the hooks to easily attach its own event handlers. Which is less secure? Code I modify myself, not knowing JavaScript, or code that has at least been reviewed by people at Macromedia (for whatever that's worth)?
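For contrast, the pure-CSS equivalent of that entire preload-and-swap dance is now roughly this (class name and image paths invented for illustration):

```css
/* Modern rollover: the browser swaps the image on hover. No JavaScript,
   no preloading functions, nothing for a CSP to complain about. */
.nav-button {
  background-image: url("button.png");
}
.nav-button:hover {
  background-image: url("button-over.png");
}
```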

I could stuff that into a file (mm.js, I guess) and load it on every page (with a sha256 hash), but as mentioned previously: pure CSS now does this. My goal was to figure out the pain points of getting no errors on an EXISTING SITE. Hashing this script and loading it from a file would not stop the inline onMouseOver handlers from throwing warnings.
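For the record, generating the hash is the easy part. A sketch, using that hypothetical mm.js filename:

```shell
# Hash a script body for a CSP script-src directive.
# The output goes into the header as: script-src 'sha256-<output>'
openssl dgst -sha256 -binary mm.js | openssl base64 -A
```

That one-liner is the easy part; the hard part is everything around it.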

And my rant was precisely that: several sites out there claim you can make this work, but you cannot.

(Note: including Mozilla’s own site, as well as a site calling itself a reference on this, with a name like https://content-security-policy.com/).

That was the point. The browser wars of “let’s not change this because it only works in this browser but totally breaks in that one” are alive and well. This might work in this version but maybe not that one. And even the canonical docs do not work.

Ironically, I’m testing this on a site that’s been frames-and-dreamweaver since 2000 because at least that worked, in a time when any more complex design fucked itself sideways on IE6.

Maybe because the FB algorithm was too busy showing you ads, you missed my prior post, but I did mention it in my FB post two days ago.

And in my last post I also said:

I’m playing around with a new, all-static, no-javascript site design, and I wonder how difficult it would be to get the previous version to run with all the knobs tightened and produce no errors, before I load it onto a flaming barge and cast it off to valhalla.

You missed/ignored/misread all of that because you wanted to make good and sure you slapped down hard.

Anyway, continuing the discussion:

Don’t be lazy. Stop using inline scripts, move that shit into a file, and load it separately with secure, hashed script tags.

It’s also not “impossible” to get this right. Thousands of sites have deployed CSP correctly and made it work.

It has nothing to do with Web 2.0, and while you use that to cast CSP as some sort of evil and broken technology, it's a false argument.

It has everything to do with browser security.

Lazy’s a personal dig, and it’s a low blow.

I taught myself CSS and Flexbox in two days, and I assert that the better place to put this script is in /dev/null. How’s that for not-lazy?

My point in bashing on Web 2.0 is that we treat "2.0" as some kind of polished, bug-fixed release point that the world magically, atomically shifts to, where everything works everywhere all at once. In reality, it continues to be a world of shit for anything that hasn't updated. Even if this were to work in the "Tech Preview" version of Safari, what about all the prior versions? What about someone with an older iPhone? What about the users who don't run updates because they're afraid (with good reason) that it'll slow their stuff down? What about the millions of un-upgradable Android devices out there?

Why the hell can't we get one goddamned US bank to support non-SMS-based 2FA? Why the hell haven't we even managed to solve caller-ID spoofing? Where's DNSSEC deployment? How are RPKI and BGPsec coming along? Why are we still doing credit cards with signatures, or accepting magstripes at all? It's always going to be a shit show, because every site that wants to roll this stuff out has to design for Grandma with her 12-year-old iMac.

I like the idea of CSP, or I simply wouldn’t be doing this, but this upgrade (again) provided me with an interesting place to play around with what was possible without a full rewrite.

Do you know why non-TLS sites are not secure, and why no one should be providing content over non-TLS connections on the web? Even static ones?

Because anyone can intercept and look at what a user is looking at, and that in and of itself is a privacy violation. In some cases if the content is controversial or specific or even sexual, it could be used against the user later.

No disagreement. I've been a fan of SSL for years, but I've also watched as many users were forced to accept and click through bad SSL warnings on sites like GNU Savannah, because the maintainers didn't want to pay for a real SSL cert for purely political, Stallmanesque reasons. I dearly wish there were a trustworthy way to get an SSL cert onto local devices (like home routers), but we haven't cracked that one yet, so that's yet another place we're telling clueless users to click "I accept the risk, add exception."

I deployed a full CA for all our management machines at my day job, and I regularly lament the fact that the state of CRL checking is just plain fucked, or that the OpenSSL OCSP responder, at least until recently, was a single-threaded piece of garbage.

You can go look at my five-pages-of-no-sexual-content-that-hasn't-changed-in-twenty-years and decide, based on that, if you think it warrants SSL. I've also been paying for (and still pay for) a real SSL cert on my server's hostname for 20 years, for things like SMTP and IMAP. But when certs were pay-to-play things, I wasn't about to shell out extra money for the vanity page with no user interaction.

In my case, the only reason to put a cert on there was “now browsers are throwing warnings”. I think the warnings are a good thing, to be clear, and I think there’s very specific use-cases for non-ssl sites, but that was the impetus.

These days, I do all my SSL with mod_md, which puts all the Let's Encrypt interaction right there in the server, and my current lament is that the state of purchased certs means I can't use ACME with any of them. Hell, when I go to buy an SSL cert from one of the three biggest vendors, I'm asked to pick which software I'm running from a list.

Go ahead, find nginx or lighttpd.

So yeah, TLS is fine.
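For the curious, the mod_md setup I'm describing is roughly this minimal (domain name is obviously a placeholder):

```apache
# mod_md handles the ACME/Let's Encrypt dance inside Apache itself;
# no certificate paths to manage by hand.
MDomain example.org
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.org
    SSLEngine on
</VirtualHost>
```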

The Wordpress comment? It’s a favorite thing to beat on for the “old guard.”

WordPress is inherently insecure because it's got years of bad plugins associated with it and it's written in an ancient language (PHP). However, there are plenty of sites that will let you run WordPress and convert it on the fly to an all-static, all-TLS-protected site (see Strattic).

It’s just rapidly becoming a false narrative now.

No, it's not. Lots of hosting companies still offer WordPress hosting where your /wp-admin URL is exposed to the public internet, available for knocking on. Lots of hosting companies have made keeping that WordPress patched for you a value-add. But at the end of the day, it's still awful software, with a plugin ecosystem that lets plugins and themes (with lots of vulnerable JS) that haven't seen a release in years be downloaded, installed, and used. And there are lots of people who will sell you management of your insecure WordPress site, all swearing they're the one platform to use.

Lots of people hire their marketing people because they know WordPress, and that's what their site is in and always has been, and the pain of migrating away is too complicated. I've been at a company that went through this: first an in-house CMS, then two iterations of Drupal with several teams of consultants, then an in-house WordPress person who decided the grass was greener at Apple.

And lots of users (remember, kids: user is a four-letter word!) are too afraid of breaking their themes to go press that "upgrade" button.

Ask me how I know.

--


Gushi

Gushi/Dan Mahoney is a sysadmin/network operator in Northern Washington, working for a global non-profit, as well as individually.