Mac & i: Apple has demonstrated on iOS that there are ways to at least make it much more difficult for attackers to exploit security holes. Is the (likely) dependency on older code that you mentioned also the reason why Apple doesn't push harder for use of the sandbox and Seatbelt under OS X? Safari Extensions seem to be sandboxed, for example, but Safari as a whole appears to be unprotected...
DDZ: I believe that the enhanced security unique to iOS (i.e. boot chain of trust, comprehensive application sandboxing, code signing enforcement) was primarily designed to protect Apple's business model rather than the user's data. This gives Apple a strong economic incentive to work pretty hard at it, whereas with protecting a user's data, the return on investment is less direct. One must also keep in mind that malware threats in the wild against Mac OS X are still quite rare. As the Mac's market share increases, hopefully Apple will implement more enhanced security features to stay ahead of the threats.
CM: Apple could definitely sandbox some applications to make it tougher on attackers, or at least offer an option for this (for example, an option to run Safari in a "high security mode" or something). The problem with something like Safari is that it is designed to do so many things that it is hard to write sandbox rules that would limit malware. Safari can "Open" or "Save" files, so exploits/malware will be able to do so. Safari can execute other programs, so exploits will be able to do so, and so on. So it is tough to write a generic sandbox for Safari. However, something like the Flash plugin for Safari would be easy to sandbox (it already runs in a separate process), and I don't know why they haven't done this.
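To make this point concrete: OS X's Seatbelt uses a Scheme-like profile language (applied, for instance, via the sandbox-exec tool). The following is a hypothetical, heavily simplified sketch of what a profile for a Safari-like process might look like, not an actual Apple profile – note how every capability the browser legitimately needs is one an exploit inherits:

```
(version 1)
(deny default)
; the process must be able to load system libraries and frameworks at all
(allow file-read* (subpath "/System") (subpath "/usr/lib"))
; Safari can "Open" and "Save" user files, so exploit code can too
(allow file-read* file-write* (subpath "/Users"))
; Safari can launch helper applications, so exploit code can too
(allow process-fork)
(allow process-exec)
; a browser obviously needs the network, so malware gets it for free
(allow network*)
```

A narrowly scoped plug-in process, by contrast, could get by with a far tighter deny-almost-everything profile, which is why Flash is the easier sandboxing target.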
Mac & i: Flash indeed seems to be a major stability problem and should be sandboxed – I have Safari on my MacBook Air crashing all the time because of Flash. Let's not suppose Apple does it on purpose... While we're at it, Google's browser sandboxes its processes, so it's definitely feasible. Google also pays a few hundred up to a few thousand dollars for vulnerabilities in Chrome. Does Apple have a similar policy of rewarding security specialists for their work?
DDZ: Safari runs a number of legacy 32-bit plug-ins out-of-process, including Flash and QuickTime, when Safari is running as a 64-bit process on Snow Leopard. While this was primarily done for compatibility, it does have some stability benefits, as a crashed plug-in will not take the rest of the web browser down with it. Google's Chrome web browser was designed from the ground up with a multi-process model for isolating renderer processes and plug-ins from each other as well as to support sandboxing. It is likely a significant amount of work to migrate an existing web browser code base (i.e. Safari, Firefox, or Internet Explorer) to a fully multi-process model.
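The stability benefit of running a plug-in out-of-process can be sketched in a few lines (Python here for brevity, not actual browser code; the function names are made up for illustration): the host spawns the "plug-in" as a child process, and even a hard crash in the child leaves the host alive to observe the exit code.

```python
import multiprocessing
import os

def plugin_worker():
    """Stand-in for an out-of-process plug-in that crashes hard."""
    os._exit(139)  # mimic an abnormal exit (128 + SIGSEGV)

def run_plugin_out_of_process():
    """Run the 'plug-in' in its own process; a crash there cannot
    take the host ('browser') process down with it."""
    p = multiprocessing.Process(target=plugin_worker)
    p.start()
    p.join()
    return p.exitcode  # host survives and sees how the plug-in died

if __name__ == "__main__":
    code = run_plugin_out_of_process()
    print("plug-in exited with code %d; host process still alive" % code)
```

In an in-process model the same fault would bring down the whole browser, which is exactly the Safari-plus-Flash failure mode described above.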
Google has been quite progressive in using bug bounty programs to financially incentivize external researchers to take a look at a number of their products and report their findings. Apple hires information security engineers and consultants, but they have not yet set up a similar program to reward external researchers for voluntary vulnerability reports.
CM: No, Apple doesn't pay researchers. Apple takes the view that it doesn't have a security problem and doesn't need to work with researchers. Compare this with Microsoft, who sponsor information security conferences, bring in researchers to speak to their people, throw parties for researchers, and have a research team that releases papers and tools. Apple does none of this and has very little communication with security researchers. I don't think I've ever received an email or phone call from Apple in response to something I submitted to them. This is not the case for other companies like MS or Adobe, who openly communicate with me to see what I'm working on, whether I have ideas to help them, and so on.
Mac & i: Compatibility with an existing code base surely is a double-edged sword. While it may pay off for users, it may also impede progress and innovation, which is not always a good thing when talking about software – and which comes a bit unexpectedly when talking about Apple, the industry's main innovator.
Jacob Appelbaum, known for his cold boot attacks, once commented that he had sent in a bug report about a password-visibility issue in loginwindow.app, and that it turned out to be a duplicate, two and a half million bugs later (see http://www.securityfocus.com/archive/1/488930). We heard the bug may still be open – would you say this is symptomatic of Apple's security focus, or just an unfortunate episode?
CM: I've found Apple security pretty responsive to bugs I've reported, but then again, I tend to think I get better than average service from them :) On a related issue, Apple has had trouble keeping the open source components of its operating system up to date. Examples that come to mind include PCRE, Samba, and Java. There are generally well-known vulnerabilities, sometimes with known exploits, for the outdated versions of these packages that ship in OS X.
DDZ: Apple has been pretty responsive to my bug submissions as well and, like Charlie, I've found that building a reputation with them over time by sending in well-analysed vulnerabilities definitely helps. Even then, the communication is mostly one-way: they will contact you if they need more information, but otherwise it's pretty silent. These days, I'd rather just send my spare-time findings to someone like ZDI and let them manage communication with the vendor for me, so that I can get on with my life.