RETURN $ecure;

Security, Technology and Life

Two weeks never hurt anyone.

leave a comment »

There are plenty of arguments about whether full disclosure to completely open channels is the correct decision.

I’m not going to spend this particular post aggrandizing myself. DarkReading and XSSed have done enough of that.

I just want to reiterate the comments I have made on other sites: I agree with Ben Cotton on the delay of the Fedora 37 release.

Written by Rodney G

10/28/2022 at 10:47 pm

Posted in Security, Technology

PowWeb passwords

with one comment

Just another rant about a remotely possible scenario. Earlier this week I had forgotten the password to the control panel of one of my sites. I went to recover it and found out, to my dismay, that you can trigger a password change using nothing but the domain name. Curious, I put my domain in and found that my password was instantly changed to gibberish. This normally wouldn’t be a huge deal, but this host’s master FTP/SQL/etc. accounts are based on this same password, so if you used them for anything, they are now totally non-functional until you change your password and update the scripts that use it. The password change form declares that I can’t reuse previous passwords (DO THEY KEEP A LOG!?), so I can’t simply change it back and have all my stuff working again. That would be a weird DoS, eh? Write a script to automate this process and burn through all of the victim’s common passwords, forcing them to keep changing a bunch of config files. Sure, you can blunt it by creating separate accounts for FTP and whatnot, but it’s still weird.

Written by Rodney G

05/20/2008 at 9:09 am

Posted in Security

Enabling CSRF

with 4 comments

There was some talk on the WASC mailing list about CSRF recently, specifically about how to defeat token/nonce-based defenses. I have wanted to write about this for a while but haven’t had the time. A quick rundown of the threads: people claimed that using XSS and other attacks to perform CSRF was the way to defeat tokens. While true, that isn’t really just CSRF anymore. I’m a bit on the fence here. I believe CSRF is a totally separate problem from everything else, and that simple tokens and captcha-like devices (not current image captchas, mind you; those blow) are enough to defend against CSRF.
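
To make the token idea concrete, here is a minimal sketch of a per-session nonce check, written as Node-style JavaScript purely for illustration; the session object and field names are hypothetical, not taken from any particular framework.

var crypto = require('crypto');

// Issue a random nonce, tie it to the session, and embed it in the form
// as a hidden field.
function issueToken(session) {
  session.csrfToken = crypto.randomBytes(16).toString('hex');
  return session.csrfToken;
}

// Refuse to act on a state-changing request unless the submitted token
// matches the one stored in the session.
function isValidToken(session, submittedToken) {
  return typeof submittedToken === 'string' &&
         submittedToken === session.csrfToken;
}

A forged cross-site request doesn’t know the nonce, so it fails; the point of the threads was that XSS on the same site can simply read the nonce and replay it, which is exactly why I treat that as a separate problem.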

The real issue is the other attack vectors that enable a malicious user to perform CSRF regardless of the tokens. XSS-enabled CSRF is the most common problem in this area; once you have XSS, forcing the user to perform an arbitrary action is almost trivial. I mentioned this before during the HackerSafe era on sla.ckers and DarkReading: what matters at the end of the day is total site integrity. You likely won’t ever reach 100% security, but every little hole matters, because each one further compromises the otherwise secure remainder.

On a slightly related note, I do have a few ideas to help at least slow down, if not stop, CSRF-abusing worms. A site could implement an IDS-like system that watches important site features/actions. If activity on, say, the logout button increases dramatically, then either you have a large loss of user base for some other reason, which is an issue in itself, or there is a somewhat benign worm annoying your users. Or, for repeatable actions (sending private messages within the site), keep an ‘impact’-like value that increases each time the action is done in rapid succession. If a user reaches a certain value, they are either spamming and/or propagating a worm, and you should at least temporarily block the feature in question while you fix the issue. A rough sketch of that ‘impact’ idea follows.
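
This is only a sketch, assuming an in-memory store keyed by user; the threshold and decay rate are made-up numbers, and a real site would tune them and persist the counters somewhere sensible.

var impact = {};           // userId -> { score, lastAction }
var THRESHOLD = 10;        // hypothetical score at which we intervene
var DECAY_PER_SECOND = 1;  // score drains away while the user is idle

// Returns false once the user is acting fast enough to look like a worm
// or a spammer, at which point the feature should be blocked for a while.
function recordAction(userId) {
  var now = Date.now();
  var entry = impact[userId] || { score: 0, lastAction: now };
  var idleSeconds = (now - entry.lastAction) / 1000;
  entry.score = Math.max(0, entry.score - idleSeconds * DECAY_PER_SECOND);
  entry.score += 1; // charge this action; rapid repeats outrun the decay
  entry.lastAction = now;
  impact[userId] = entry;
  return entry.score < THRESHOLD;
}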

Anyways, enough ranting! Keep an eye out; I’ll be releasing some semi-interesting stuff on worm history and future progression, with specific regard to propagation and reach.

Written by Rodney G

05/2/2008 at 7:34 pm

Posted in Security

90% Exploitable – Is this progress?

leave a comment »

It’s been nearly three years since many of us estimated that 9 out of 10 sites had at least one flaw, while most had more. I have not been too active in the security world as of late (though this will change soon!), but I would have hoped we had made some sort of progress. It seems XSS is still amazingly pervasive, and CSRF, the now-waking giant, is not far behind.

As DarkReading reports, WhiteHat has issued a press release stating that around 9 of 10 sites have at least one vulnerability, while the average site has around six or seven. I have rarely seen WAFs as the solution, but even over a few years, nearly an eternity for the internet, little to no progress has obviously been made. So perhaps it is finally time. In the whitehats’ defense, though, the odds are amazingly against them. Over a hundred million sites operate now, and the 1 in 10 sites that is safe is often brochure-ware: a site with little or no interactivity, static HTML on secure servers.

Perhaps we ARE making developers more security-minded and making progress. I do remember saying this a while back:

Many sites are vulnerable to XSS, and since all Websites change, eventually another XSS hole will probably open up on sites previously thought [of as] safe.

This seems to remain fairly true today. The very nature of interactive websites, combined with how often they are revamped, means that everything is very dynamic and thus, apparently, very insecure.

Oh well. At least with my inactivity as of late, I won’t be heading to an early grave.

Written by Rodney G

04/10/2008 at 1:19 am

Posted in Security, Technology

CSRF ramblings

with one comment

I was reading over this post by Robert Hansen of SecTheory just after reading a post of mine about Opera phone integration. It got me thinking, specifically this part:

It will also have phone to tag support, which basically turns any numbers formatted like a phone number into a link, when it’s clicked the phone will call it. Pretty nifty stuff.

That would be some damn interesting CSRF. Take control of the browser and force it to load the phone’s calling directive (e.g. callto://). You could get a person to call your costly line while they are browsing the net, or use caller ID to add them to some sort of calling list. If the phone and browser are integrated enough, perhaps you could even steal other data like their contacts, their service provider, or their phone number even if it is privately listed.

As if I needed another reason to hate phones.

Written by Rodney G

02/18/2008 at 7:41 pm

Enabling Urchin

leave a comment »

Urchin, more commonly known as Google Analytics, is web analytics software that measures many statistics and helps you understand them by presenting the results in various ways. It is also closely tied to Google AdWords now. But as it becomes better known, people concerned about privacy and targeted advertising are blocking these services. Besides the obvious app-level content blockers, there are also HOSTS file edits that block the domain the JavaScript file comes from.

If you run a website and take care to study your statistics, with Google Analytics or not, you probably know that these sorts of measurements are often invaluable for site feedback. For example, if someone leaves a page as soon as they visit it, you know the page might need work in some way. So if we are reliant on Urchin, how can we make sure we are measuring our entire user base?

There is an obvious solution that is also fairly easy: host your own copy of urchin.js. But there is a flaw with this if the block is via the HOSTS file: the tracking image and other requests made to Google’s server will be blocked anyway. The fix is to add an A record on a domain you control pointing to the Google Analytics server. This way, users send requests to the same server, but via the domain you control (e.g. urchin.myserver.net).
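
As a rough illustration, here is what the include could look like once urchin.js is self-hosted; urchin.myserver.net is the example host above, the UA account ID is a placeholder, and only the script’s source changes, not the tracker call itself.

// Self-hosted Urchin include: same tracker, different source host.
window._uacct = 'UA-XXXXXX-X'; // placeholder Analytics account ID
var ga = document.createElement('script');
ga.type = 'text/javascript';
ga.src = 'http://urchin.myserver.net/urchin.js';
ga.onload = function () {
  // Fire the usual page-view hit once the script is available
  // (assumes the browser fires onload for dynamically added scripts).
  urchinTracker();
};
document.getElementsByTagName('head')[0].appendChild(ga);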

Often you will see “Waiting for response from google-analytics.com…” or something similar in your browser. So a mix of hosting urchin.js yourself and routing requests via a domain you control could also have the added benefit of speeding up the loading of some pages. Many sites I visit that use third-party tracking often take some time to finish loading, which is a problem for me since I have Opera set to re-render after the page is loaded.

Plus, I’m sure users that run things like NoScript will be more likely to allow stuff.trustedserver.com than Google. ;D

Written by Rodney G

01/3/2008 at 5:27 pm

Posted in Life

UserJS URL Sanitizing

with 5 comments

I was reading a post by RSnake over at DarkReading and got to thinking about client-side security. There seems to be very little we can do to protect the average user against most attacks. NoScript is fine for a tech-minded individual, but the average user will probably forget about it and wonder why a site is now missing functionality.

So what do you think of some JavaScript that could check the URL for typically bad characters (since JS can easily spot HTML entities, URL encoding, etc.) and then sanitize them somehow? This could mean removing them or properly entifying them. Sounds fine, but even Greasemonkey scripts only run after a page is loaded. How could we do this earlier? Let’s take a look at UserJS in Opera.

User JavaScript is loaded and executed as if it were a part of the page that you visit. It is run immediately before the first script on the page. If the page does not contain any scripts of its own, User JavaScript will be executed immediately before the page is about to complete loading. It is usually run before the DOM for the page has been completed. (Note that this does not apply to Greasemonkey scripts.) […] User JavaScript will not be loaded on pages accessed using the opera: protocol. By default, it is also not loaded on pages accessed using the https: protocol.

Oh! So it should run before any other script is run. This is good: we can check whether a script was injected, then proceed to remove it. But what if the injection is inside JavaScript? It will be hard to tell if it’s valid or not. Well, since we are using UserJS already, let’s look at the UserJSEvent object and event listeners.

if (location.hostname.indexOf('example.com') != -1) {
  window.opera.addEventListener('BeforeScript',
    function (e) {
      e.element.text = e.element.text.replace(/!=\s*null/, '');
    },
    false
  );
}

BeforeScript
Fired before a SCRIPT element is executed. The script element is available as the element attribute of the UserJSEvent. The content of the script is available as the text property of the script element, and is also writable:

UserJSEvent.element.text = UserJSEvent.element.text.replace(/!=\s*null/, '');

So with this, we can check the text of a script element before it fires and sanitize it, which we could do only when it contains echoed content. Just a note that this isn’t restricted to off-site JS like it can be in some browsers: UserJS has full access to remote files pulled in via script src, even before they execute. The hardest part is obviously the sanitizing itself, but with some work I don’t see it being a huge issue for some basic client-side XSS protection. I’m sure you could even expand it to search all scripts for things that are commonly malicious, like sending document.cookie somewhere, to help protect against persistent XSS. A rough sketch of that last idea is below.
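
This is only a naive sketch, assuming Opera’s BeforeScript event quoted above; the “suspicious” heuristic is a placeholder, not a real filter, and a serious version would need much better detection of echoed content.

// Neuter any script that looks like it ships document.cookie off to
// another host, before it gets a chance to execute.
window.opera.addEventListener('BeforeScript', function (e) {
  var text = e.element.text || '';
  var leaksCookie = /document\.cookie/.test(text) && /https?:\/\//.test(text);
  if (leaksCookie) {
    e.element.text = ''; // the text property is writable, so blank the script
  }
}, false);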

Anyways, I’d love to hear feedback on this idea before I go run off and make it.

Written by Rodney G

11/21/2007 at 6:03 pm

Posted in Security

Mobile Zombies, XSSWW, hack the planet?

leave a comment »

Warning, this post may be long, rant-like and totally off-target. 😛

While using bi-directional persistent communication channels to control browsers isn’t anything new, and neither is the concept of a Cross Site Scripting Warhol Worm, recently I have been thinking about them again. First off, earlier I was discussing a concept regarding mobile zombies in the #slackers IRC channel. I recently got a new phone and found out it has a fairly fast connection to the internet; some phones can even reach 4.9 Mbit/s! This opens up a whole new area, especially if malicious users can harness it. It seems at least 2.7 billion people own a mobile phone. If even a small percentage of these users have high-speed internet access, that’s still a lot more attack surface and data throughput. Plus, phones are often on longer than a home PC, so “follow the sun” no longer applies.

So, enough information and theory: is this possible? Can we rope mobile phones into a giant botnet? Well, to be honest, I really have no idea. I have no statistics on which phones can run JavaScript in their browser or which browser people use for mobile browsing, nor the resources to test any of this. But for the sake of this post, let’s assume at least 5% of the 2.7 billion people have high-speed internet on their mobile phone. That’s 135 million people. Since they are using newer models of phone, let’s assume at least 80% of them have some sort of vulnerable web technology enabled on their phones (JavaScript, Flash, Java (probably this…)). That’s still a little over 100 million phones. Now don’t get too excited, I doubt anyone could infect all of them. So how could we infect them? It’s pretty simple: persistent XSS, tricking users into downloading Java viruses, etc.

So I went a little too in-depth on the mobile zombienet. Sue me. It seems possible and something to consider.

Anyways, back to the XSSWW. While RSnake claimed it wasn’t fiction in his post, at the time it seemed like the technologies and attacks that could be used for something like that didn’t really exist yet. Now they do. It doesn’t seem very far-fetched, or hard for that matter. Here’s the little process my mind went through imagining how a worm like this would work. First, one would need a few 0day XSS holes, preferably at least one in a major forum software like phpBB or vBulletin and another in a web-based instant messaging service, such as MSN Web Messenger or Meebo.com. Obviously the initial attack would be over the forum software. It could use search engines to find other vulnerable installs of the forum to propagate. I imagine some sort of algorithm would be needed to choose a random result so the same forum wouldn’t be infected over and over all at once. Infected users would have their browser window hijacked with a full-screen iframe so we could keep control longer, then be zombified using AttackAPI or similar tools. Then we could use the CSS history hack to find which social networking sites, web-based instant messengers, etc., the user has visited that we have a vulnerability in. For an IM site, we could hijack the user’s contact list and find ways to infect those people as well, perhaps using a JavaScript XSS scanner or the PDF XSS to find a reflective XSS hole, then repeat the CSS history hack process on the stolen contact list. Then of course we could do anything we wanted, from DDoSes to using stolen MSN login credentials to send spam, or any of the other usual bad deeds.

Now, the key problems with this scenario are obviously losing control of zombies and overloading the control channel with traffic. Since the scale would theoretically be huge, we could easily increase the interval between requests to the channel and keep only one message in the queue for all zombies at a time; then you change that message whenever you want to change objectives. Now, assuming XSS vulnerabilities will get fixed and we couldn’t renew our supply of lost zombies, we would have a problem, unless we created a JavaScript function that changed something in the worm: the propagation methods and the XSS vectors used. ;D Since we will more than likely have one or more central control locations, another thing a client could request is a series of XSS vectors to try on specific sites, probably as an XML document, as well as the next place to request details from. (Then you could compromise different servers all the time in an attempt to hide your own identity.)

So, combining the new power of mobile zombies with some theory about how a Warhol worm would work, we have a very scary scenario. I really have no idea how to stop something like this. I think I’ll go unplug my Ethernet cord now.

P.S. Sorry if you read all of that.

Written by Rodney G

11/14/2007 at 8:02 pm

Posted in Security

WASWiki and my return.

with one comment

I was originally going to post about ideas for learning grounds for web application security, but the sla.ckers IRC (#slackers on irc.irchighway.net) pointed me first to OWASP. I realized it was already quite a goldmine of information, but it doesn’t seem too newbie-friendly, and much of it is theory rather than direct examples. Then kuza55 reminded me of webappsecwiki.com. It’s pretty bare, but I believe we can turn it into a more practical learning site. It’s already going in the right direction, in my opinion.

Anyways, enough of my dreams of grandeur: I am going to start getting back into web application security. Aside from the trusted third-party whitelisting issues (otherwise known as XSSing YouTube mods) I talked about in the #slackers channel, I have not contributed much lately. Things are yet again more stable in my life, so I have time to do research and whatnot now. I’m going to start using WordPress.com again for various reasons. First, it’s easier than hosting my own; although it may incur some security issues, I’m sure it will be nothing major. Secondly, it’s already linked to by several people, so it has some PR. I hope to be able to contribute more soon!

Written by Rodney G

11/13/2007 at 11:35 pm

Posted in Life, Security

The Murky Science of Web Application Security

with 2 comments

Jeremiah had a talk with Simson Garfinkel about Web Application Security recently. You can read Jeremiah’s post here and the full article here.

There is nothing new at all from a security perspective in this article, but it really lives up to its name as an introduction to web app sec. It points out a few things we already knew, such as the scary fact that up to 80 percent of all websites suffer from some sort of vulnerability. The ones that don’t are mostly static HTML sites with no complex backend, ‘brochure-ware’ as the article calls them.

It also elaborates on some of the issues that must be faced, such as the need for secure coding. It’s pretty bad practice in most cases (but not all) to just slap on a WAF and hope for the best. As this quote points out:

 Yes, it would be nice to eliminate these well-known bugs with better coding practices. But we live in the real world. It’s better to look for the bugs and fix them than to simply cross your fingers and hope that they aren’t there.

So all in all, if you’re a frustrated web app sec guy, this is a great article to show the higher-ups. Murky indeed. As RSnake would say, clear as mud?

Written by Rodney G

05/14/2007 at 11:00 am

Posted in Uncategorized
