No sooner had I made my previous post than I ran across this excellent analysis of a paper written by Peter Gutmann describing why DRM is bad. That is of course a massive oversimplification of the paper, so I suggest you read both the analysis and the paper.
DRM and trusted computing in general are very interesting to me, as they have a massive impact on what I do. I'm not just saying that because I recently started auditing the trusted computing capabilities in Vista (including BitLocker; those guys really put a lot of thought into different possible attack scenarios). People who write DRM software don't want people like me poking around in their process address space with fancy debuggers and the like. That means reversing applications and tracing their execution flow will get harder, which in turn means finding and writing exploits for bugs will get harder. Keep in mind this doesn't mean the bugs will go away; it means new techniques for finding them will be developed.
Looking at the use of 0day in targeted attacks these days, if I were bug hunting for the money I would be targeting DRM apps like crazy right now. A vuln there would be worth more, because you know it won't be easily duplicated, it would be hard to track down, and a fix is not something a vendor could turn around very quickly.